I started off with a story. FAIL. Here's the plan:
- save the blob to a key-store using a UUID as its key. The key-store should support a TTL with callback.
- put the UUID in the MQ
- The MQ is always going forward
- the transaction is only ever touched by one service at a time
- If there is a reason to fork the transaction, then each branch should replace the UUID and insert a new blob in the key-store.
It is my experience that most services in a transaction only affect a portion of the blob at a time, so copying the whole blob around is costly.
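A minimal sketch of the plan above, using an in-memory dict as a stand-in for the key-store (a real one would also support TTL with callback) and a FIFO queue as the MQ; the names `submit`, `consume`, and `fork` are illustrative, not from any particular library:

```python
import queue
import uuid

key_store = {}        # stand-in for the key-store
mq = queue.Queue()    # stand-in for the MQ

def submit(blob):
    """Producer: store the blob under a fresh UUID, enqueue only the UUID."""
    key = str(uuid.uuid4())
    key_store[key] = blob
    mq.put(key)
    return key

def consume():
    """A service: pull the UUID, mutate the blob in place, pass the UUID forward."""
    key = mq.get()
    blob = key_store[key]
    blob["step_count"] = blob.get("step_count", 0) + 1  # touches only part of the blob
    mq.put(key)  # the UUID keeps moving forward; the blob is never copied
    return key

def fork(key):
    """Forking: replace the UUID and insert a new blob for the branch."""
    new_key = str(uuid.uuid4())
    key_store[new_key] = dict(key_store[key])  # one copy, only at fork time
    mq.put(new_key)
    return new_key
```

Only the fork pays the cost of a copy; the normal path moves a 36-byte UUID through the MQ instead of the blob itself.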
Redis has primitives (hashes and lists) that let you keep a per-transaction list and append to it as the transaction moves around the system, instead of trying to log-aggregate the events after the fact.
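The shape of that pattern is one Redis list keyed by the transaction UUID: each service does an O(1) `RPUSH` onto the transaction's list as it handles the message, so the ordered event log already exists when the transaction finishes. A sketch with an in-memory stand-in (the real Redis commands are named in the comments; the key naming scheme `txn:<uuid>:events` is an assumption, not a convention from the text):

```python
from collections import defaultdict

# Stand-in for Redis: one list per transaction UUID.
# Real Redis equivalents:
#   RPUSH  txn:<uuid>:events <entry>   -- O(1) append to the list
#   LRANGE txn:<uuid>:events 0 -1      -- read the whole log in order
event_log = defaultdict(list)

def rpush_event(txn_id, service, event):
    # Each service appends its entry as the transaction passes through,
    # instead of reconstructing the order from scattered logs afterwards.
    event_log[f"txn:{txn_id}:events"].append(f"{service}:{event}")

def read_log(txn_id):
    return list(event_log[f"txn:{txn_id}:events"])
```

Because only one service touches the transaction at a time, appends arrive in causal order with no coordination needed.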