Awesome, thanks!

> On Jun 13, 2022, at 07:30, Daniel Stone <daniel@fooishbar.org> wrote:
>
> Hi,
>
> On Mon, 13 Jun 2022 at 08:39, Daniel Stone <daniel@fooishbar.org> wrote:
>> Yes, that's what's happening. Our (multi-host-replicated, etc.) Ceph
>> storage setup has entered a degraded mode due to the loss of a couple
>> of disks - no data has been lost, but the cluster is currently unhappy.
>> We're working through fixing this, but have bumped into some other
>> issues since, including a newly flaky network setup and changes since
>> we last provisioned a new storage host.
>>
>> We're working through them one by one and will have the service back
>> up with all our data intact - hopefully in a matter of hours, but we
>> have no firm ETA right now.
>
> Thanks mainly to Ben, everything is back up and running now.
>
> Cheers,
> Daniel