Our online services are experiencing issues. Since our services run on Azure, we have received the following notice from Microsoft:
SUMMARY OF IMPACT: Between 16:05 and approximately 23:45 UTC on 9 April 2016, a subset of customers using Azure Search, Cloud Services, Event Hubs, HDInsight, Log Analytics, Managed Cache Service, Redis Cache, RemoteApp, Service Bus, SQL Database, Storage, Stream Analytics, Virtual Machines, Visual Studio Application Insights, and Web Apps in East US may have experienced connection failures or long latencies when attempting to access their resources. Backend processing for Azure Search, Application Insights, and Operational Insights is ongoing, and customers may experience delays in the processing of data for up to 8 hours after mitigation of the incident. Additionally, a small subset of Virtual Machines were not recovered by automated mitigation. Impacted customers that are still experiencing issues with their Virtual Machines are advised to visit the MSDN article http://aka.ms/vmrecovery. We have also provided free support to this subset of customers to assist with Virtual Machine recovery as needed.

PRELIMINARY ROOT CAUSE: Engineers identified memory resource contention on front end storage servers that support a single storage deployment unit (stamp) in East US.

MITIGATION: Engineers stopped the backend process that was consuming the available memory, expanded the available memory, and then restarted the nodes to restore availability.

NEXT STEPS: Investigate the underlying cause of the memory resource contention and implement corrective measures to prevent a recurrence.
We believe the cause of our issues is independent of the incident described above; no other information has been given to us.