January 11, 2013 at 00:00
START DATE: 01/13/2013
START TIME: 9:00PM PST
TYPE OF WORK: Software updates
PURPOSE OF WORK: Performance and stability
IMPACT OF WORK: Power cycle of the device
We will be performing software updates on the vz013.dc4.sea hardware node on Sunday, Jan 13, 2013 at 9:00PM PST. There will be approximately 10-15 minutes of downtime while the device is power cycled and containers come back online.
Update 9:08PM PST: Services have since recovered, and containers are now back online. This maintenance window has been completed.
January 04, 2013 at 10:36
At approximately 10:10AM PST we detected network-related problems on our Seattle network. While matters are resolved at this time, we continue to investigate the cause and duration of the event. We will keep this post updated as we have additional information to provide.
Update 10:40AM PST: Our upstream aggregator, Internap, is working on further upstream routing issues within their network. We continue to work with them toward a root-cause explanation for the packet loss in our Seattle network.
Update 12:12PM PST: Our bandwidth aggregator Internap located and addressed a DDoS attack on their network at approximately 10:52AM PST. Due to the scale of the attack, network service providers in our mesh experienced periods of increased utilization, which caused some packet loss and latency within our Seattle datacenter. Both Network Redux and Internap will continue to monitor our networks and will update this thread should additional issues related to this incident arise.
December 19, 2012 at 13:15
We are currently experiencing issues with the xen002.cl1 chassis. We are working toward resolution and will update this post once the issue is resolved.
Update 1:27PM PST: While xen002.cl1 is in a failure state, we are force-restarting the VMs that were running on this chassis. This is being done in parallel with our work on this cluster member.
Update 1:52PM PST: The impacted EVS clients from xen002.cl1 have been manually moved to other hardware nodes. We continue to review the incident.
Update 2:29PM PST: No root cause has been identified; however, we believe the problem stems from lost network connectivity between this server and the pool master. For now, we will keep this physical server from hosting client EVSes until we have more concrete information on the cause. This item is being marked as resolved, as all impacted client EVSes were moved to other hardware nodes as of the previous update.
November 24, 2012 at 14:05
We are currently experiencing network connectivity issues with Internap in Seattle and are working with their engineers toward resolution. There is currently no ETA for service restoration; not all customers in Seattle are impacted by this.
Update 2:45PM PST: All network services have been restored. We continue to investigate this event and will keep this thread updated.
Update 3:45PM PST: We continue to work with our routing vendor to determine a root cause for the failure. We are also continuing to parse debug information related to the incident, as only a portion of our network was impacted by the event.
Update 4:45PM PST: We continue to evaluate the failure that occurred on core router cr-0-1.dc4.sea.networkredux.net. Preliminary analysis shows that the aggregate Ethernet port-channel between cr-0-1.dc4.sea.networkredux.net and cr-0-2.dc4.sea.networkredux.net flapped in combination with the upstream BGP link. This prevented cr-0-1 from communicating with neighboring routers via BGP or OSPF.
The clients that experienced downtime during this event were those with SRX security appliances whose VRRP master was aligned with cr-0-1. The SRX units did not fail over their redundancy groups because the link states on cr-0-1 were still active.
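For readers interested in the mechanism, below is a minimal illustrative sketch in Python (hypothetical names, not the actual SRX or VRRP logic) of why failover monitoring keyed only on link state does not react when a router's links stay up while its BGP/OSPF sessions fail:

# Illustrative sketch only: a failover monitor that checks link state alone
# cannot detect a peer whose links remain up but whose routing is down.
def should_fail_over(link_up: bool, routing_adjacencies_up: bool,
                     monitor_link_state_only: bool = True) -> bool:
    """Decide whether a redundancy group should fail over to the backup."""
    if monitor_link_state_only:
        # Link-state-only monitoring: fail over only on physical link loss.
        return not link_up
    # Monitoring that also tracks control-plane health (e.g. BGP/OSPF
    # neighbor state or an upstream reachability probe) catches the case
    # where links are up but routing is broken.
    return (not link_up) or (not routing_adjacencies_up)

# The condition described above: cr-0-1's links stayed active while its
# BGP/OSPF sessions were down, so link-state-only monitoring saw no failure.
print(should_fail_over(link_up=True, routing_adjacencies_up=False))   # False: no failover
print(should_fail_over(link_up=True, routing_adjacencies_up=False,
                       monitor_link_state_only=False))                # True: failover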
November 22, 2012 at 07:15
We are currently reviewing an issue with the v001.dc4.sea chassis. We will update this message once the issue has been resolved.
Update 7:45AM PST: The issues were traced to a single virtual server experiencing a denial-of-service attack. The hardware node has been restored to service.