Satalyst Brilliance on 28 Jul 2015

Azure Continuous Integration Monitoring Part 2


Azure-powered multi-stage CI monitoring tools

This is the second in a series of 3 posts covering something fun we’ve been messing around with here at Satalyst in our innovation time. Part 1 covers a little about what we’re trying to do, part 2 (this one) describes the infrastructure we built on Azure to do all the heavy lifting, and part 3 <link to part 3> shows off the finished shiny and wraps things up.

In the previous post, I described the idea of a CD pipeline based on dependent jobs (or “builds” in typical CI parlance), and how some folk at Satalyst decided to have some fun making our very own blinkenlights.

It all started with an idea to build a build light that could accurately show the state of an entire CD pipeline. Then it got bigger…

Satalyst’s CTO decided things needed to be more, well, ‘enterprisey’, so we added (wait for it):

  • A Jabber/XMPP room as the notification channel,
  • An Azure Cloud Services Worker Role (the XMPP Converter in the diagram below) to pick up messages from the Jabber/XMPP room, convert them, and publish them to…
  • An Azure Service Bus Queue that holds the incoming XMPP notifications, where they are consumed by…
  • Another Azure Cloud Services Worker Role (the Deployment Monitor in the diagram below) that turns the stream of notifications into a coherent set of status update messages, which are published to…
  • An Azure Service Bus Topic whose purpose is to make the event stream readily available to consumers, such as…
  • The Raspberry Pi powered LED board that shows the status of a whole pipeline.
  • In the mix is also an Azure DocumentDB that stores all the data so it can be accessed by other tools that need a historical record.

…phew…

[Diagram: Anthony's FedEx project]

A later addition that’s not shown in this diagram is a web app that consumes historical records from the DocumentDB and presents a nice interactive dashboard to look at the history of the various build pipelines being tracked.
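We won’t dig into the dashboard here, but to illustrate the idea, here’s a minimal sketch of a history query using the azure-cosmos Python SDK (the modern successor to the DocumentDB client). The database and container names and the document fields are placeholders, not our actual schema:

    # Sketch of a history query against the pipeline records store.
    # Database/container names and document fields are illustrative
    # placeholders, not the project's real schema.
    from azure.cosmos import CosmosClient

    client = CosmosClient(url="https://<account>.documents.azure.com:443/",
                          credential="<key>")
    history = client.get_database_client("pipelines").get_container_client("history")

    # Fetch the recent status records for one pipeline, newest first.
    records = history.query_items(
        query="SELECT * FROM c WHERE c.pipelineId = @id ORDER BY c.timestamp DESC",
        parameters=[{"name": "@id", "value": "website-release"}],
        enable_cross_partition_query=True,
    )
    for record in records:
        print(record["timestamp"], record["status"])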

Reading the diagram roughly from right to left, following the flow of information:

Jabber/XMPP was chosen as the notification channel because a) it’s easy, and b) it’s supported out of the box by TeamCity (TC). Octopus Deploy doesn’t support XMPP notifications natively, but since we wrap the Octo jobs in TC jobs, and Octo reports back nicely to the TC job that initiated it, we haven’t really lost anything: if the Octo job fails, TC fails the wrapping TC job too.

XMPP Converter is an Azure Cloud Services Worker Role that listens for XMPP notifications from TC and pushes them onto an Azure Service Bus Queue. It doesn’t really do anything except connect the XMPP room to the Azure queue.
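The actual converter is .NET, but the bridging idea is simple enough to sketch. Here’s an illustrative version in Python using slixmpp for the Jabber side and azure-servicebus for the queue side; the room, credentials, and queue name are all placeholders:

    # Illustrative XMPP-to-queue bridge (the real converter is a .NET
    # Cloud Services Worker Role). All names/credentials are placeholders.
    import slixmpp
    from azure.servicebus import ServiceBusMessage
    from azure.servicebus.aio import ServiceBusClient

    class XmppConverter(slixmpp.ClientXMPP):
        def __init__(self, jid, password, room, sb_conn_str, queue_name):
            super().__init__(jid, password)
            self.room = room
            self.queue_name = queue_name
            self.sb_client = ServiceBusClient.from_connection_string(sb_conn_str)
            self.register_plugin("xep_0045")  # multi-user chat support
            self.add_event_handler("session_start", self.on_start)
            self.add_event_handler("groupchat_message", self.on_message)

        async def on_start(self, event):
            await self.get_roster()
            self.send_presence()
            self.plugin["xep_0045"].join_muc(self.room, "xmpp-converter")

        async def on_message(self, msg):
            # Forward the raw TeamCity notification text onto the queue
            # unmodified; the Deployment Monitor does the real processing.
            async with self.sb_client.get_queue_sender(self.queue_name) as sender:
                await sender.send_messages(ServiceBusMessage(msg["body"]))

    bridge = XmppConverter("builds@example.com", "<password>",
                           "builds@conference.example.com",
                           "<service-bus-connection-string>", "xmpp-messages")
    bridge.connect()
    bridge.process(forever=True)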

XMPP Message Queue is the Azure Service Bus Queue that holds incoming notifications until they can be processed by the Deployment Monitor. The messages in this queue are basically the bare build notification messages straight out of TeamCity. The information in those messages isn’t in a form that’s particularly useful, so this queue isn’t intended for consumption by external systems.

Deployment Monitor is another Cloud Services Worker Role. This one has all the smarts in it to correlate the TC notifications and combine them into messages that describe the state of the entire pipeline. When a notification for a single TC job comes in, this worker role will use previous messages stored in the DocumentDB along with the incoming message to work out which pipeline the message relates to, update the pipeline status, write the status back to the DocumentDB, and publish a new notification to the Pipeline Status Topic.
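Again, the real role is .NET, but the receive / correlate / persist / publish loop looks roughly like this sketch. The queue, topic, and container names, the stored document shape, and the notification format assumed by parse_notification are all placeholders:

    # Sketch of the Deployment Monitor's core loop: pull a raw notification
    # off the queue, correlate it with stored pipeline state, persist the
    # update, and publish a whole-pipeline status message. The real role is
    # .NET; all names, fields, and the notification format are placeholders.
    import json
    from azure.cosmos import CosmosClient
    from azure.servicebus import ServiceBusClient, ServiceBusMessage

    def parse_notification(text):
        # Placeholder parser: pretends notifications arrive as
        # "pipeline|stage|status". Real TeamCity text needs real parsing.
        pipeline_id, stage, status = text.split("|", 2)
        return pipeline_id, stage, status

    sb = ServiceBusClient.from_connection_string("<service-bus-connection-string>")
    cosmos = CosmosClient(url="https://<account>.documents.azure.com:443/",
                          credential="<key>")
    state = cosmos.get_database_client("pipelines").get_container_client("state")

    with sb.get_queue_receiver("xmpp-messages") as receiver, \
         sb.get_topic_sender("pipeline-status") as topic:
        for msg in receiver:
            pipeline_id, stage, status = parse_notification(str(msg))

            # Merge the new stage status into the stored pipeline state.
            doc = state.read_item(item=pipeline_id, partition_key=pipeline_id)
            doc["stages"][stage] = status
            state.upsert_item(doc)

            # Publish the status of the whole pipeline so subscribers
            # (like the Pi) need no correlation logic of their own.
            topic.send_messages(ServiceBusMessage(
                json.dumps(doc),
                application_properties={"pipelineId": pipeline_id},
            ))
            receiver.complete_message(msg)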

Pipeline Status Topic is an Azure Service Bus Topic that any other device or service can subscribe to for status updates on the pipelines being monitored. These status messages contain the status of the whole pipeline, which makes them really easy to consume on the Pi to update the LEDs. Because all the smarts are in the Deployment Monitor worker role, there’s no need for complex processing logic on the Pi, or in whatever other topic subscriber we might want to implement.
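Getting a new consumer onto the stream is just a matter of creating a subscription for it. Here’s a sketch using the azure-servicebus management client; all names are placeholders, and the optional filter assumes each message carries a pipelineId property, as in the monitor sketch above:

    # Sketch: a new consumer just needs its own subscription on the topic.
    # Names are placeholders; the optional rule assumes messages carry a
    # "pipelineId" application property (as in the monitor sketch above).
    from azure.servicebus.management import (ServiceBusAdministrationClient,
                                             SqlRuleFilter)

    admin = ServiceBusAdministrationClient.from_connection_string(
        "<service-bus-connection-string>")
    admin.create_subscription("pipeline-status", "led-board")

    # Optionally, only deliver one pipeline's updates to this subscriber.
    admin.create_rule("pipeline-status", "led-board", "one-pipeline",
                      filter=SqlRuleFilter("pipelineId = 'website-release'"))
    admin.delete_rule("pipeline-status", "led-board", "$Default")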

Finally (phew!), the Raspberry Pi runs some trivially simple code that subscribes to the Service Bus Topic, and when any new status message comes in, reads the state of the pipeline out of the message, and sets the state of the LEDs accordingly.
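Something like this sketch captures the spirit of it, though the pin assignments, stage names, and message shape are assumptions rather than our actual Pi code:

    # Sketch of the Pi-side consumer: subscribe to the topic, read each
    # status message, set the LEDs. Pin numbers, stage names, and the
    # message shape are illustrative assumptions.
    import json
    import RPi.GPIO as GPIO
    from azure.servicebus import ServiceBusClient

    # One green/red LED pair per pipeline stage (BCM pin numbers assumed).
    STAGE_PINS = {"build": (17, 18), "test": (22, 23), "deploy": (24, 25)}

    GPIO.setmode(GPIO.BCM)
    for green, red in STAGE_PINS.values():
        GPIO.setup(green, GPIO.OUT)
        GPIO.setup(red, GPIO.OUT)

    client = ServiceBusClient.from_connection_string("<service-bus-connection-string>")
    with client.get_subscription_receiver("pipeline-status", "led-board") as receiver:
        for msg in receiver:
            status = json.loads(str(msg))
            for stage, (green, red) in STAGE_PINS.items():
                ok = status["stages"].get(stage) == "success"
                GPIO.output(green, GPIO.HIGH if ok else GPIO.LOW)
                GPIO.output(red, GPIO.LOW if ok else GPIO.HIGH)
            receiver.complete_message(msg)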

Easy!

Read on in the next instalment to see the finished product!
