Jokes

Jokes on the topic of Second Life. Nuff said.

Linden Lightbulb Re-Deploy Post Mortem

This one was my contribution to Nicholaz Beresford’s call for Linden lightbulb jokes.

The Linden Lightbulb 1.18.5 release included updates for several systems, including new carbon filament libraries, alloy couplings (a piece of infrastructure which handles a variety of services, such as local fixation and capabilities, and proxies current between systems), and glass geometry. The deploy, as planned for November 6th, did not require any downtime – all components could be updated live. We planned to perform the rollout per our patch deploy sequences: updating central rooms one by one, then offices. Read on for the day-by-day, blow-by-blow sequence of events which followed…

Tuesday, November 6th

Prior to the 1.18.5 Lightbulb deploy, at around midnight (all times are Pacific Standard Time) we suffered an electricity outage to our restroom facilities, which caused many systems to drop offline. The system recovered on its own after about an hour, and our electricity provider’s initial investigation pointed to hardware issues with the network infrastructure.

Starting at 10:00am we began the actual update of the lighting fixtures to the Linden 1.18.5 Lightbulbs. We started by updating the “backbone” fixtures on central facilities one by one, such as hall areas, tackling the “non-risky” fixtures first. At 11:00am we got to the “risky” fixtures, which handle emergency lighting (i.e., showing the way in case of evacuation) as well as several other key services. Closely monitoring the load on the electrical grid (which usually shows increased load when something goes wrong) as well as internal graphs which closely track the number of appliances online, we started making updates. Everything seemed to be going well.

Towards 11:15am the various internal communication channels lit up with reports of appliance failures. We stopped updates of these central systems (7/8ths of the way through) and started to gather data. We have seen this problem in the past when hardware issues or bugs caused the grid monitoring systems to spin out of control, but this time there were no obvious failures; for unknown reasons the grid wasn’t responding to requests from the appliances. Hoping for a quick fix (i.e., a simple configuration change that could be applied live), we spent about 30 minutes trying to determine the cause, then gave up and rolled back to the previous lightbulb generation.

(Fortunately, in this case, a rollback was straightforward, and simply resulted in “unknown” lighting status for about 10 minutes. Rollbacks are not always so easy – see below!)

Simultaneously, lighting in developer cubicles and coffee rooms failed. These failures were caused by the update as well (but, as it turned out, for different reasons). Once the dust had settled on the rollback, it was easy to roll back one more set of fixtures to restore the lights.

Completely unrelated to the update, the electrical load on the central systems required us to pause the Tuesday stipend payouts, delaying them for several hours.

Wednesday, November 7th

Several Lindens continued the investigation, and determined a source of the issues seen on Tuesday: the “emergency lighting” system was updated to use eolian and solar sources to increase performance, but the capacity of these sources was set too low. After some work, we were able to replicate this failure in test environments to verify the fix. The updated bulbs were re-distributed to the fixtures making up the service, and we prepared to try again on Thursday.

(Little did we know that the insufficient electrical capacity was merely a symptom, not the root cause.)

Thursday, November 8th

On Thursday, we proceeded with the 1.18.5 Lightbulb update. The first half of the central fixtures were updated by 12:00pm. We paused to ensure that the system was behaving as expected, then continued at about 12:30pm, completing the updates. Shortly thereafter, as the number of online lights in the building passed 46,000, the lighting began failing in a new way. Although most of Linden Lab was functioning properly, many light fixtures were slow to come on or failed to light altogether, and some other appliances failed as well. We diagnosed the problem as an unrecognized dependency – the central transformers were assuming that the fuses would shut down on overload, but the fuse circuits (which had not yet been updated) were assuming the transformers would throttle down instead. Once this root cause was identified (by about 2:15pm), we were able to change the breaker code in the central transformers’ controllers to resume throttling current consumption, since that was the faster fix. Restarting the transformers did cause employees to sit in the dark for a short period of time, which was unexpected (and is being investigated).

Starting after 3pm, we initiated a rolling restart to update the electrical grid as well and complete the rollout, a process which took about 5 hours. During a rolling restart, in order to reduce electricity consumption and load on central systems, the service is in an unusual state – employees are not allowed to put lights or appliances back on in case of a crash. There was anecdotal evidence that some floors were crashing a lot, but we were unable to verify that this was not simply due to bad hardware until after the process was complete.
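For the pedantically inclined, here is a minimal, entirely hypothetical sketch of that unrecognized dependency (the class names are invented for the joke and bear no relation to any real Linden code): each side politely assumes the other will handle the overload, so once only one side is updated, nobody does.

```python
# Toy model (purely illustrative) of the Thursday mismatch: the updated
# transformer expects the fuse to blow on overload, while the not-yet-updated
# fuse expects the transformer to throttle. Neither acts, so the overload
# goes unhandled.

class OldFuse:
    """Pre-1.18.5 behaviour: trusts the transformer to throttle itself."""
    def on_overload(self) -> bool:
        return False  # does not blow; assumes the transformer will cope


class NewTransformer:
    """1.18.5 behaviour: no longer throttles; trusts the fuse to blow."""
    def __init__(self, fuse):
        self.fuse = fuse

    def handle_overload(self) -> str:
        if self.fuse.on_overload():
            return "fuse blew: overload contained"
        # The throttling code path was removed in the update, so an un-blown
        # fuse leaves the overload completely unhandled.
        return "overload unhandled: lights start failing"


if __name__ == "__main__":
    print(NewTransformer(OldFuse()).handle_overload())
```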

After the post-roll cleanup, it became clear that the floor crashes were not an anomaly. A few contingency plans were discussed, including rollbacks for specific floors, but we were primarily in a data-gathering phase.

Friday, November 9th

As sleepy Lindens stumbled back into work, one incorrect (but ostensibly harmless) idea was tried; unfortunately, due to a typo, this accidentally knocked many employees off the electrical grid entirely around 9:40am. Shortly thereafter, more testing, including complete rollbacks on simulator offices, showed that the new transformer controller code was indeed the culprit, but it took a while longer to identify the cause. By 12:00pm the investigation had turned up a likely candidate – and an indication that a simple widespread rollback of the code would not, in fact, be safe or easy!

The crashing was caused by the transformer “message queue” getting backed up. A server-to-viewer message (related to the grid emergency control system) was updated and changed to move over TCP (reliable, but costly) instead of UDP (unreliable, but cheap and fast). On floors with many appliances and lights, this would cause the transformer to become backed up (storing the “reliability” data) and eventually crash. We have a switchboard that allows us to toggle individual messages between TCP and UDP on the fly, but while testing we discovered a second issue – another circuit necessary for the UDP channel needed to be updated, and it could not be changed on the fly; if we flipped the switch back from TCP to UDP, the transformer would crash. (The UDP to TCP update on-the-fly worked, which is how we were able to do the rolling restart in the first place.)
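To make the backlog concrete (with made-up names and numbers, and no claim to resemble the real message system), here is a tiny sketch of why the “reliable” setting backs a queue up on a busy floor while the “unreliable” one simply drops messages:

```python
# Toy simulation (purely illustrative) of a message queue on a busy floor.
# UDP-style delivery drops messages when the queue is full, so the backlog
# stays bounded; TCP-style delivery keeps every message (the "reliability"
# data), so the backlog grows until the transformer gives up.
from collections import deque

QUEUE_LIMIT = 1_000      # hypothetical capacity before UDP starts dropping
DELIVERY_RATE = 50       # messages the transformer can deliver per tick
BUSY_FLOOR_RATE = 80     # messages produced per tick on a crowded floor


def run(reliable: bool, ticks: int = 300) -> str:
    queue = deque()
    for tick in range(ticks):
        for _ in range(BUSY_FLOOR_RATE):
            if reliable or len(queue) < QUEUE_LIMIT:
                queue.append("emergency-lighting status")
            # else: UDP-style, the message is simply lost
        for _ in range(min(DELIVERY_RATE, len(queue))):
            queue.popleft()
        if len(queue) > 5 * QUEUE_LIMIT:
            return f"crashed at tick {tick} with {len(queue)} queued messages"
    return f"survived {ticks} ticks with {len(queue)} queued messages"


if __name__ == "__main__":
    print("UDP-style:", run(reliable=False))  # bounded backlog, some loss
    print("TCP-style:", run(reliable=True))   # grows until it "crashes"
```

(Which is why the fix described below was to flip the message back to UDP – once the supporting circuit could be safely updated.)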

By testing on individual floors, we were able to confirm that switching back to UDP eliminated the problem, although this required cutting off all electrical current before throwing the switch. We co-opted an existing engineer for “host-based” rolling restarts (a job he had been employed for once in the past), and had him shut down offices on each floor (doing several in parallel), update the breaker circuits, and restart the transformers. After significant testing, we asked this engineer to perform another rolling restart of the service, which was completed by 11pm on Friday, including subsequent cleanup.

Saturday, November 10th

Unrelated to the deploy (but included here to clear up any confusion), on Saturday at 5:20pm we suffered another electrical outage, which resulted in hundreds of developers being offline for just under two hours. The cause was the expiration of a contract renewal term with our electricity provider. We extended the contract, and our DNOC team brought the affected floors back up.

What Have We Learned

Readers with technical backgrounds have probably said “Well, duh…” while reading the above account. There are obviously many improvements that can be made to our tools and processes to prevent at least some of these issues from occurring in the future. (And we’re hiring operations and release engineers and developers worldwide, so if you want to be a part of that future, head on over to the Linden Lab Employment page.)

Here are a few of the take-aways:

  • Our load testing of systems is insufficient to catch many issues before they are deployed. Although we have talked about janitors and in-house technicians as a way to roll out changes to a small number of offices to find issues before they are widely deployed, this will not allow us to catch problems on central systems. We need better monitoring and reporting; our reliability track record is such that even problems such as electricity failures for 1/16th of employees aren’t noticed for a significant period of time.
  • When problems are detected, we don’t do a good enough job internally in communicating what changes went into each release at the level of detail necessary for first responders to be most effective.
  • Our end-to-end deployment process takes long enough that responding to issues caused during the rollout is problematic.
  • Our tools for managing deploys have not kept pace with the scale of the service, and manual processes are error prone.
  • Track date-driven work (e.g. contract renewal expiry) more closely; build pre-emptive alerts into the system if possible.
  • Be more skeptical about doing updates while the office is live, especially when involving third-party providers.
