Downtime procedures are relatively easy when shutting everything down for a long window is acceptable, which is more or less what you are doing.
Prior to the downtime, while the site is still running, consider posting a reminder that maintenance is coming. Organizations vary in what they expect here: some systems show a banner saying the site goes down tonight, others keep a separate maintenance calendar.
Because the web server itself is being shut down, there is not much opportunity to show a maintenance page; it is probably not worth bothering with for now. In the future this might sit behind a load balancer, which could serve a static down page while connections are drained.
Where available, use the same service manager to start and stop all applications. One tool gives you consistent status reporting, dependency handling, cleanup of failed stops, and other features. Confusingly, not every distro wants you to use the apachectl program, but that is fine: how httpd stops in response to signals is well documented.
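For example, with systemd the same commands work for every service (the unit names here match the ones used below):
# One tool reports on everything the same way
systemctl status apache2 mysqld
# Quick scripted checks of whether a unit is running or has failed
systemctl is-active apache2
systemctl is-failed mysqld
# See what a unit pulls in, useful when deciding stop and start order
systemctl list-dependencies apache2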
Stop the web server, then the database server. Rely on the service manager, in this case systemd units, to provide a relatively graceful shutdown, similar to apachectl graceful-stop. Read the units for the details of how they implement this.
# Stop web first to stop application from accessing data
systemctl stop apache2
# A long wait here is likely not necessary
# the service manager is shutting down processes, and the database is going away soon as well
# Stop services one at a time so they go down in this order
systemctl stop mysqld
# Wait a bit for database files to be closed, just in case
# FIXME this delay is arbitrary
sleep 10
If you take backups by copying the live database files, take care that no processes still have them open. Copying files that are still being written to risks corrupt copies and a bad backup. In normal situations a graceful stop quickly blocks new requests, and under light load cleanup and shutdown finish fast. The tricky part is ensuring that holds even under extraordinary circumstances.
A careful backup script could check that nothing still has the files open, for example with fuser, and possibly send a signal such as SIGTERM to any remaining processes before giving up.
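As a rough sketch, something like the following, where the data directory and backup destination are assumptions to adjust for the actual install:
# Paths are examples; point these at the real data directory and backup target
DATADIR=/var/lib/mysql
BACKUP=/backup/mysql-files
# Refuse to copy while anything still has the top-level data files open
# (a shallow check; fuser on these globbed files does not descend into subdirectories)
if fuser --silent "$DATADIR"/* 2>/dev/null; then
    echo "database files still in use, not copying" >&2
    # Optionally nudge stragglers before investigating
    # fuser --kill -TERM "$DATADIR"/*
    exit 1
fi
# Files are closed, copy them somewhere safe
mkdir -p "$BACKUP"
cp -a "$DATADIR"/. "$BACKUP"/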
Requiring downtime, plus a little paranoia about ensuring the files are closed, makes this file copy method not as easy as it might seem. Consider online backup methods instead, such as the database's built-in hot copy tools or dump exports to files.
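As a sketch of the dump approach, assuming MySQL, a running server, and credentials already configured (for example in ~/.my.cnf):
# Online logical backup; --single-transaction gives a consistent view of
# InnoDB tables without locking them for the duration of the dump
mysqldump --single-transaction --all-databases > /backup/mysql-dump.sql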
For completeness, another option is a consistent, online, storage-level snapshot, copying the files out of that. It also avoids downtime, but can be even trickier because the database was never shut down. Doing this safely means pausing writes and understanding how databases recover at startup.
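Very roughly, and only as a sketch: assuming the data files sit on their own LVM volume (/dev/vg0/mysql is an invented name) and that briefly pausing writes with a global read lock is acceptable, it could look like this. Holding the lock with a timed SLEEP is crude; real tooling should hold it explicitly until the snapshot exists.
# Hold a global read lock from a background client session; the lock lasts
# as long as that connection stays open, and the SLEEP is only a crude timer
mysql --execute "FLUSH TABLES WITH READ LOCK; DO SLEEP(60);" &
LOCK_PID=$!
sleep 2   # crude again: give the lock a moment to be acquired
# Snapshot the volume holding the data files while writes are paused
lvcreate --snapshot --size 5G --name mysql-snap /dev/vg0/mysql
# Closing the lock-holding session releases the lock
kill "$LOCK_PID"
# Mount the snapshot read-only, copy the files out, then clean up
# (some filesystems need extra mount options here, for example nouuid on XFS)
mkdir -p /mnt/mysql-snap
mount -o ro /dev/vg0/mysql-snap /mnt/mysql-snap
rsync -a /mnt/mysql-snap/ /backup/mysql-snapshot/
umount /mnt/mysql-snap
lvremove --yes /dev/vg0/mysql-snap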
Test restores occasionally. Actually restore the data to a staging database in a test environment, start it up, and spot check that the data is correct.
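A minimal sketch of such a check, assuming the dump file from above and a throwaway staging instance (staging-db, appdb, and orders are placeholder names):
# Load the full dump on a scratch staging instance; it recreates the original
# databases by name, so never point this at production
mysql --host staging-db < /backup/mysql-dump.sql
# Spot check a table you know well against expected production numbers
mysql --host staging-db --execute "SELECT COUNT(*) FROM appdb.orders"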