Some systems are experiencing issues

Stickied Incidents

13th April 2024

fra01 Impact on several VMs

We are investigating the issue and are working on a fix.

  • All services are fully operational again. We will continue to monitor the situation and update this incident as fixed, with more information to follow soon.

Past Incidents

  • 16th February 2024

    CEPH cluster CEPH Maintenance

    The storage environment is receiving an unscheduled upgrade for performance optimization and usage improvements. While workloads are being drained and migrated, clients have experienced performance degradation.

    The maintenance window has been extended to minimize the impact.

  • The issue has been resolved. Customers may see IOPS and bandwidth metrics improve by up to 201%.

  • 2nd December 2023

    Backup server Backup RAID Rebuild

    Our backup server's RAID array is currently being upgraded to increase performance and accommodate growing storage demands. We have expanded the array by adding two 18TB datacenter SAS drives.

    All existing backups can still be restored anytime.

    During this process, VM backups are temporarily paused. We appreciate your understanding as we work to complete the upgrades swiftly.

  • Unfortunately, all data stored on the backup service has been lost. The system has been redeployed and has been back in operation since 03.02.2024, 15:08:26 UTC.

    The full incident report is available at the following link: https://nextcloud.prohosting24.de/s/PADsmGjTtA3xwS8

  • For the time being, backups can no longer be restored either. Unfortunately, the array has entered a failed state and can only recover after the reshape has finished.

    Until then it is not safe to use the array, so we have deactivated the backup service until further notice. The current reshape is estimated to finish on 31.01.2024 at 17:42:38 UTC (a sketch for tracking the reshape progress follows at the end of this incident).

    So far no data has been corrupted or lost.

  • The reshape triggered by the newly added disks creates load that slows the recovery of the failed disk. We are not willing to accept the risk of possible data loss and are therefore pausing backups again for at least 24 hours.

    The backups of the last few days can still be restored as usual from the customer area.

  • The backup service is currently undergoing a recovery process in response to a failed drive in the RAID array, leading to performance issues. Concurrently, we are expanding the array by incorporating two additional 18TB enterprise drives.

  • The issue has not reoccurred so far.

  • Backups can be written again; we are still monitoring the situation.

  • Unfortunately, we have to continue maintenance on the backup service by extending the array with another two 18TB drives. We will keep you updated; new backups of all VMs have been stored successfully since our last maintenance.

    It is still possible to restore your existing backups at any time.

  • Backups can be written again.
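
    As an illustration of the reshape updates above: the following is a minimal sketch, not our actual tooling, for following an mdadm reshape or recovery via /proc/mdstat. The polling interval and thresholds are assumptions; mdadm --detail /dev/mdX reports the same progress.

        # Minimal sketch (illustrative only): poll /proc/mdstat and report the
        # progress of a running reshape/recovery/resync on any md array.
        import re
        import time

        PROGRESS = re.compile(
            r"(?P<op>reshape|recovery|resync|check)\s*=\s*(?P<pct>[\d.]+)%"
            r".*finish=(?P<finish>[\d.]+)min.*speed=(?P<speed>\d+)K/sec"
        )

        def reshape_status(path="/proc/mdstat"):
            """Return the first reshape/recovery progress found, or None if the arrays are idle."""
            with open(path) as handle:
                for line in handle:
                    match = PROGRESS.search(line)
                    if match:
                        return {
                            "operation": match["op"],
                            "percent": float(match["pct"]),
                            "eta_minutes": float(match["finish"]),
                            "speed_kib_s": int(match["speed"]),
                        }
            return None

        while True:
            status = reshape_status()
            if status is None:
                print("no reshape/recovery in progress")
                break
            print(f"{status['operation']}: {status['percent']:.1f}% done, "
                  f"ETA {status['eta_minutes']:.0f} min at {status['speed_kib_s']} KiB/s")
            time.sleep(60)  # poll once a minute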

  • 29th November 2023

    pve2 Node Failures

    One of our hypervisors, which hosts VMs and storage cluster services, has failed. We are evacuating the host.

  • We have successfully fixed the system.

  • 23rd November 2023

    Network Internal Gateway down

    Unfortunately, the internal gateway was not accessible due to an incorrect VRRP configuration, which had an impact on internal services and our homepage.

  • 12th October 2023

    CEPH cluster Degraded NVMe CEPH performance

    We are investigating ongoing performance issues. Some measures have already been implemented to decrease the performance impact.

  • We have identified the cause once more and taken further measures to remedy the performance degradation, which has now been resolved.

  • We are investigating new performance issues and have already implemented further measures to improve client performance.

  • We have resolved the cluster performance degradation and are implementing enhanced monitoring to detect such issues proactively and keep improving ongoing operations.
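
    As a rough example of the kind of proactive check mentioned in the last update (not the monitoring we actually deployed): the sketch below asks the cluster for its status in JSON and flags anything that is not HEALTH_OK or active+clean. The JSON field names can differ between Ceph releases, so treat the parsing as illustrative.

        # Illustrative sketch: query cluster status and warn about non-clean PG states.
        import json
        import subprocess

        def cluster_snapshot():
            """Run `ceph status --format=json` and return the parsed output."""
            result = subprocess.run(
                ["ceph", "status", "--format=json"],
                capture_output=True, text=True, check=True,
            )
            return json.loads(result.stdout)

        status = cluster_snapshot()
        health = status.get("health", {}).get("status", "UNKNOWN")
        print("overall health:", health)

        # Anything besides active+clean (degraded, backfilling, peering, ...)
        # is worth alerting on before clients notice slow I/O.
        for entry in status.get("pgmap", {}).get("pgs_by_state", []):
            if entry.get("state_name") != "active+clean":
                print(f"attention: {entry.get('count', 0)} PGs in state {entry.get('state_name')}")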

  • 8th May 2023

    ph24-12 ph24-12 down

    The hypervisor seems to be offline

  • A bug in the running kernel version caused the system to kernel panic. We have patched the kernel and rebooted the system.

  • 17th December 2022

    Crashed Hypervisor ph24-11

    The hypervisor ph24-11 is not responding due to I/O-related issues with the host's software RAID.

  • There were fan-control issues which led to the system crashing. We have adjusted the fan control configuration to suit our needs.

  • The system has been evacuated and we are conducting further investigations to ensure that this problem does not occur again.

  • 16th November 2022

    Crashing VMs

    We are currently investigating an issue where several VMs have crashed.

  • The issue has been resolved

  • 22nd October 2022

    ph24-9 Malfunction of hypervisor ph24-9

    We are working on restoring the functionality of the affected guests and of the hypervisor.

  • The problem has been resolved.

  • 15th August 2022

    prohosting24.de Backend errors

    We are currently implementing a fix which should make the web app operational again.

  • All problems have been resolved now.

  • We stumbled across some issues with restored tables; those tables are going to be restored again manually.

  • The problem has been resolved.

  • 5th April 2022

    ns2.prohosting24.de / ns2.prohosting24.eu Nameserver 2 shutdown

    We performed a controlled shutdown of the secondary name server ns2 due to a detected inconsistency in the replicated DNS zones. We are currently investigating why AXFR zone transfers are not working as expected (a consistency-check sketch follows at the end of this incident).

    All DNS zones are still resolvable through the primary nameserver ns1.

  • We have redeployed the secondary name server. All DNS zones are now available and consistent on both nameservers.
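
    To illustrate the consistency check referenced in this incident, the sketch below compares the SOA serials reported by ns1 and ns2 and then attempts an AXFR against the primary, using the dnspython library. The zone name is a placeholder, and since AXFR is normally restricted, a refusal from outside our own network is expected.

        # Illustrative sketch using dnspython; the zone below is a placeholder.
        import dns.message
        import dns.query
        import dns.resolver
        import dns.zone

        ZONE = "example.com."  # hypothetical customer zone

        def resolve_ip(hostname):
            """Look up the IPv4 address of a nameserver hostname."""
            return dns.resolver.resolve(hostname, "A")[0].address

        def soa_serial(server_ip, zone):
            """Ask one nameserver directly for the zone's SOA serial."""
            query = dns.message.make_query(zone, "SOA")
            reply = dns.query.udp(query, server_ip, timeout=5)
            return reply.answer[0][0].serial

        ns1 = resolve_ip("ns1.prohosting24.de")
        ns2 = resolve_ip("ns2.prohosting24.de")

        # Differing serials indicate that replication to the secondary is stale.
        print("ns1 serial:", soa_serial(ns1, ZONE))
        print("ns2 serial:", soa_serial(ns2, ZONE))

        # Attempt the zone transfer the secondary would normally perform.
        try:
            transferred = dns.zone.from_xfr(dns.query.xfr(ns1, ZONE))
            print("AXFR ok,", len(transferred.nodes), "names transferred")
        except Exception as exc:
            print("AXFR failed:", exc)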

  • 4th April 2022

    Network 5.181.151.0/24 offline

    The provider of the subnet 5.181.151.0/24 has interrupted the routes.

    We have assigned new IP addresses to the affected servers. If a server is not accessible via its new address after a reboot, the address may have to be adjusted manually using the noVNC console (see the sketch at the end of this incident).

  • All customers have been migrated to other IP address ranges.
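
    For the manual adjustment mentioned above, the sketch below shows the usual iproute2 steps for a Linux guest; the same three ip commands can also be typed directly in the noVNC console. The interface name, address, and gateway are placeholders (TEST-NET-1 values), not the actual replacement IPs.

        # Illustrative sketch: bring the interface up, add the newly assigned
        # address, and point the default route at the subnet's gateway.
        import subprocess

        IFACE = "eth0"                 # adjust to the guest's interface name
        NEW_ADDRESS = "192.0.2.10/24"  # placeholder for the newly assigned address
        GATEWAY = "192.0.2.1"          # placeholder for the subnet's gateway

        for command in (
            ["ip", "link", "set", IFACE, "up"],
            ["ip", "addr", "add", NEW_ADDRESS, "dev", IFACE],
            ["ip", "route", "replace", "default", "via", GATEWAY],
        ):
            subprocess.run(command, check=True)

        # This only changes the running configuration; persist the address in the
        # guest's network configuration (e.g. /etc/network/interfaces or netplan)
        # so it survives the next reboot.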

  • 3rd April 2022

    Backup server Proxmox backup server performance issues

    An ongoing migration of data between RAID arrays on the backup server provided by our housing provider is causing performance issues, so backups can currently neither be created nor restored. Once the data migration is complete, backup creation and restoration will be performed much faster than before.

  • The backup server is fully operational.

  • The backup server is now being migrated back to a RAID 10 array.

  • The migration has finished successfully.

  • 2nd April 2022

    CEPH cluster Performance issues due to maintenance

    Performance issues detected

  • Migration of data has finished successfully. Everything is up and running again.

  • We have temporarily added a new object storage daemon (OSD) to ph24-8 and managed to get the crashed ph24-8/9 OSD working again. The ph24-5 hypervisor was successfully taken out of service.

    Migration of data that is still in progress may affect the performance of the Ceph cluster.

  • The ph24-8/9 OSD crashed while we were migrating data to other OSDs. This failure resulted in 25 placement groups (PGs) becoming unavailable. The data was not lost, but it was unavailable for a few minutes.

  • 27th March 2022

    Network Network unavailable for subnet 5.181.151.0/24

    All services from the network 5.181.151.0/24 are unreachable

  • The owner of the IPv4 resources 5.181.151.0/24 removed the route object for the subnet because of an abuse report regarding fraudulent content being hosted at web.prohosting24.de.

    The fraudulent content has been removed and connectivity was restored by recreating the old route object.
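
    As a rough illustration of what recreating the route object involves, the sketch below queries the public RIPE database REST search interface for route objects covering the prefix. The response handling here is an assumption (the JSON layout and the not-found behaviour may differ), so treat it purely as an example.

        # Illustrative sketch: look up route objects for the prefix in the RIPE database.
        import requests

        PREFIX = "5.181.151.0/24"

        response = requests.get(
            "https://rest.db.ripe.net/search",
            params={"query-string": PREFIX, "type-filter": "route"},
            headers={"Accept": "application/json"},
            timeout=10,
        )

        if response.status_code == 404:
            # Assumed behaviour: no matching objects were found, i.e. the route
            # object is missing and most networks will not accept the prefix.
            print("no route object found for", PREFIX)
        else:
            response.raise_for_status()
            objects = response.json().get("objects", {}).get("object", [])
            routes = [o for o in objects if o.get("type") == "route"]
            print(len(routes), "route object(s) found for", PREFIX)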