Update at 22:36:46(BST): The issue has now been resolved with this server. The upgrade went as expected and all servers are checking out fine after the completed work. The affected systems now have double the RAM (20GB to 40GB), which should vastly improve performance.
We will continue to monitor system uptime and performance but expect things to dramatically improve from here on out.
Update at 22:16:24(BST): Upgrade is now complete and the servers are rebooting and being cross checked.
Update at 21:56:20(BST): Our engineer is now on site and starting the upgrade. PH20-22 will now go offline for 15-30 minutes.
Update at 21:24:13(BST): Several tweaks have improved stability through the evening, and we also expect our engineer to be on site within the next 30 minutes to commence the final upgrade.
Update at 20:27:31(BST): Due to the increasing instability of the platform, we have escalated the emergency work.
We are presently arranging for an engineer to head to the datacenter and begin the upgrade this evening (Apr 16th). More updates as this unfolds, but we expect to restore full availability this evening.
We have identified a load issue on this trio of servers.
We will be monitoring and maintaining these systems as best we can through this evening (Apr 16th).
During the morning of April 17th we intend to complete an emergency upgrade on this system which will vastly increase system resources.
This should correct the issue long term.
16th April 2013, at 6:36PM
16th April 2013, at 9:36PM
Update at 02:29:14(BST): All servers that were affected by this event should now be back online.
Update at 01:20:21(BST): We have identified the area of our network generating the issue and are presently isolating the cause. Most systems will now be fine, but some may still be slowed.
We are presently examining a slow down of several systems on our network.
16th April 2013, at 12:21AM
16th April 2013, at 1:29AM
Update at 18:31:55(BST): Prohost 17-19 are now back online and serving. Each server has had its RAM doubled from 20GB to 40GB. This, in addition to some granular tweaks to the overall system, should provide a vastly better uptime level for all users.
Update at 18:22:20(BST): PH17-19 have now been upgraded and are booting back up.
Update at 18:02:37(BST): PH17-19 are now going offline.
We will be shortly beginning an emergency maintenance window on the prohost 17 through 19 servers.
We will be doubling the servers' RAM, which should take 15-30 minutes in total. This should help ease the repeated load issues the servers have experienced lately.
We will update this post when the maintenance begins and again when it ends.
11th April 2013, at 5:03PM
11th April 2013, at 5:31PM
Update at 18:41:16(BST): This issue has been resolved. We have identified a compromised system that has now been isolated.
We have identified an issue affecting the general speed of our network.
We are attending to this matter now.
7th April 2013, at 6:22PM
7th April 2013, at 5:41PM
Update at 10:53:47(BST): The issue has now been resolved with this server.
This server is now back online. Apologies for any inconvenience caused.
Engineers have had to perform an emergency reboot on this server.
Apologies for any inconvenience caused.
5th April 2013, at 10:48AM
5th April 2013, at 9:53AM
Update at 13:50:55(GMT): The issue has now been resolved. Our phone lines are now open. Apologies for any inconvenience caused.
Update at 09:58:41(GMT): Our phone lines are currently closed as a member of our weekend staff is unable to come to work, due to the adverse weather. Apologies for any inconvenience caused.
23rd March 2013, at 9:58AM
23rd March 2013, at 1:50PM
Update at 15:31:45(GMT): The issue has now been resolved with this server.
The server is now back online. Apologies for any inconvenience caused.
Update at 14:55:21(GMT): We are having to perform an emergency reboot.
Apologies for any inconvenience caused.
15th March 2013, at 2:54PM
15th March 2013, at 3:31PM
Update at 10:17:55(GMT): This issue is now marked as resolved.
Update at 14:24:09(GMT): The issue has now been corrected. We identified a rogue server creating excess traffic that has now been corrected.
We do maintain automated systems to prevent these floods, but in this instance the system failed to stop the attack. We are presently reviewing the piece of hardware used to protect against these floods, in order to prevent a repeat.
Update at 14:13:59(GMT): We have identified the cause of the network load and are working to quickly correct this. All affected services will be accessible again momentarily.
Engineers are currently investigating initial reports of network traffic that is failing to resolve to a server. We'll post further information as soon as it becomes clear where the issue actually lies.
11th March 2013, at 1:46PM
12th March 2013, at 10:17AM
Update at 07:47:27(GMT): The issue has now been resolved with this server. Our engineers managed to isolate the cause and resolve it around 0700 GMT. All mail services have been performing as normal since.
It has come to our attention that there are currently issues connecting to our mail server database.
Our engineers have been notified and investigations will commence shortly.
This affects all incoming email, including webmail services as well as POP3 and IMAP connections for those on Professional and Business hosting plans.
34SP apologises for any inconvenience caused.
8th March 2013, at 6:23AM
8th March 2013, at 7:47AM
Update at 08:02:38(GMT): Though this was resolved some hours ago, we have now received confirmation that the maintenance window is clear and no further outage should be expected.
Update at 03:34:06(GMT): The outage was caused by unscheduled maintenance to one of our main provider's fibre networks.
We are still fine-tuning the failover system, and our engineers are working to bring it to full capacity to compensate for the loss of service. Please bear with us whilst we continue to stabilise the framework and return your service to optimal performance, and once again, please accept our sincerest apologies for any inconvenience this may have caused.
Update at 02:44:48(GMT): Though this is not now affecting all users, the initial rerouting of traffic seems not to have been sufficient. Engineers are actively working to resolve this as soon as possible. Updates will be provided as soon as we have any further news. Apologies for the continuing downtime.
Update at 02:26:28(GMT): At approximately 1:30am, our network suffered from some instability. This caused service wide connection problems and our engineers were alerted to the issue.
The engineers pinpointed this issue as originating outside of our network and rerouted traffic accordingly, as this did not fail over automatically as intended. As of 2am the issue is considered largely resolved. Please accept our sincerest apologies for any inconvenience this may have caused.
5th March 2013, at 2:18AM
5th March 2013, at 8:02AM