Now that we’ve modified our Redux store to support middleware, we have the foundation needed to integrate speech synthesis into the main flow of our application.
Leveraging the ISpeechService abstraction we previously defined, the implementation falls out relatively easily, though (as we’ll see) there are a few wrinkles we’ll need to iron out.
The constructor for SpeechMiddleware predictably accepts an ISpeechService reference, but why does it also need a reference to our store? This is so that we can asynchronously dispatch a SpeechFinishedMessage when playback completes.
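A minimal sketch of that constructor is shown below. The IMiddleware and IStore interface names are assumptions standing in for the abstractions defined earlier in the series; only ISpeechService, SpeechMiddleware, and SpeechFinishedMessage come from the post itself.

```csharp
// Sketch only: IMiddleware, IStore, and the message types are assumed
// to match the abstractions introduced earlier in this series.
public class SpeechMiddleware : IMiddleware
{
    private readonly ISpeechService _speechService;
    private readonly IStore _store;

    public SpeechMiddleware(ISpeechService speechService, IStore store)
    {
        // The store reference lets us dispatch a SpeechFinishedMessage
        // asynchronously once playback completes.
        _speechService = speechService;
        _store = store;
    }
}
```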
Our implementation of Dispatch() reacts to a SpeakMessage and passes all messages through to the next handler in the chain.
This async void method fits the usual pattern for event dispatch, which makes sense given that we’re reacting to a message representing a particular application event.
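Continuing the sketch above, Dispatch() and the handler it calls might look roughly like this. The Dispatch() signature, the SpeakAsync() method on ISpeechService, and the Text property on SpeakMessage are all assumptions for illustration.

```csharp
// Continuing the SpeechMiddleware sketch; signatures here are assumed.
public void Dispatch(IMessage message, Action<IMessage> next)
{
    if (message is SpeakMessage speak)
    {
        Speak(speak);
    }

    // All messages are passed through to the next handler in the chain.
    next(message);
}

private async void Speak(SpeakMessage message)
{
    // async void is acceptable here: this is effectively an event handler.
    await _speechService.SpeakAsync(message.Text);

    // Let the rest of the application know that playback has finished.
    _store.Dispatch(new SpeechFinishedMessage());
}
```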
If you run the code at this point, you’ll start getting exceptions thrown from within the Dispatch() method of the ReduxStore, triggered by its reentrancy check.
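The guard in question looks something like this; the field name and exception message shown here are assumptions, not the store’s actual source.

```csharp
// Sketched reconstruction of the guard inside ReduxStore.Dispatch().
if (_dispatching)
{
    throw new InvalidOperationException(
        "Dispatch may not be called while a previous message is still being processed.");
}
```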
What’s happening is that our middleware is trying to dispatch a second message while we’re still processing the first. Since actual reentrancy of our Redux store would result in lost updates, we need to serialize the processing of these messages (that is, process them sequentially).
To fix this, we add a queue that buffers messages as they are received, delaying dispatch until we finish processing the current message. The trickiest piece of the implementation is the locking required to make it safe across multiple threads.
We push the message requiring dispatch onto our internal queue. At that point, one of two cases applies.
If we’re already dispatching messages (_dispatching is true), then we don’t want to process this message ourselves. If it’s false, then we need to process the message we have, as well as any extra messages it provokes.
The key to multithreaded safety is that every manipulation of the queue and every change to _dispatching happens under the same lock.
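Putting the pieces together, a sketch of the serialized Dispatch() might look like this. DispatchCore() is an assumed inner method that runs the reducers and middleware chain, and the field names are illustrative rather than the post’s actual source.

```csharp
// Requires System.Collections.Generic. Field and method names are assumptions.
private readonly object _syncRoot = new object();
private readonly Queue<IMessage> _queue = new Queue<IMessage>();
private bool _dispatching;

public void Dispatch(IMessage message)
{
    lock (_syncRoot)
    {
        // Always buffer the incoming message first.
        _queue.Enqueue(message);

        // If another call is already draining the queue, it will pick
        // this message up; we're done.
        if (_dispatching)
        {
            return;
        }

        _dispatching = true;
    }

    // Drain the queue, including any messages provoked while dispatching.
    while (true)
    {
        IMessage next;
        lock (_syncRoot)
        {
            if (_queue.Count == 0)
            {
                _dispatching = false;
                return;
            }

            next = _queue.Dequeue();
        }

        DispatchCore(next); // assumed inner method that applies reducers/middleware
    }
}
```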
Now we can work on hooking up speech generation to our maintenance screen. As well as being useful in its own right, that will be an easy way to prove everything works properly. Given the length of this post, though, that’s something to address next time.