We've confirmed that all systems are back to normal with no customer impact as of 7/20 00:24 UTC. Our logs show that the incident began on 7/13 at 07:58 UTC and ended on 7/19 at 21:21 UTC in Japan East, and began on 7/17 at 01:53 UTC and ended on 7/20 at 00:24 UTC in East US. While the issue was being resolved, customers in Japan East and East US experienced both data latency and data loss for Service Map and VM Insights. This could have caused errors when viewing network dependency maps, as well as data latency and loss in the VMComputer, VMProcess, VMBoundPort, and VMConnection tables in Log Analytics workspaces. The latency and loss could also have caused missed or misfired alerts for customers with alert rules based on those tables. A sketch for checking whether your workspace was affected follows below.
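If you use alert rules on the tables above, one way to check for ingestion gaps is to count records per hour over the incident window. Below is a minimal sketch in Python using the azure-monitor-query SDK; the workspace ID is a hypothetical placeholder, and because the dates above do not state a year, the 2021 dates in the code are an illustrative assumption.

```python
# A minimal sketch for spotting ingestion gaps in an affected table.
# Requires: pip install azure-identity azure-monitor-query
from datetime import datetime, timezone

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

# Count VMConnection records per hour; empty or sparse hours in the
# incident window suggest data loss rather than recovered latency.
QUERY = """
VMConnection
| summarize RecordCount = count() by bin(TimeGenerated, 1h)
| order by TimeGenerated asc
"""

# East US incident window; year assumed for illustration only.
start = datetime(2021, 7, 17, 1, 53, tzinfo=timezone.utc)
end = datetime(2021, 7, 20, 0, 24, tzinfo=timezone.utc)

response = client.query_workspace(
    workspace_id="<your-workspace-id>",  # hypothetical placeholder
    query=QUERY,
    timespan=(start, end),
)

for table in response.tables:
    for row in table.rows:
        print(row)
```

The same query can be pointed at VMComputer, VMProcess, or VMBoundPort by swapping the table name, and the timespan can be adjusted to the Japan East window as needed.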
We understand that customers rely on Service Map and VM Insights as critical services, and we apologize for any impact this incident caused.