Last modified September 30, 2023
This backup solution is based on the btrfs file system, which natively integrates features that make backups easier, on the borg tool, which makes it very easy to set up incremental backups, and on unison, a graphical tool for managing backups manually.
Following the comments on my first linuxfr journal and on a second linuxfr journal, I completed the backup setup with:
- a data backup and sharing solution based on kdrive, a paid cloud service from infomaniak, with automatic synchronization and manual backups. This remote backup complements the local setup and makes it even more robust against a disaster that would destroy both the server and the local backups. A third journal on linuxfr covers this specific point; I invite you to read the comments made on these three journals, which are quite instructive about the other possible backup solutions.
- a pure backup solution with encrypted stored data, using rclone and Google Drive.
These cloud-based solutions are detailed on this other page.
Tool | Command line, usable in a bash script | GUI | Built-in data encryption | Synchronization with some clouds (list in the link) | Useful links
duplicity | Yes | No | Yes | Yes | link 1, link 2
Duplicati | Yes | Yes | Yes | Yes | link 1, link 2
restic | Yes | No | Yes | Yes | link 1, link 2
btrfs, or B-tree File System (often pronounced "ButterFS"!), is a file system initially developed by Oracle. Rather than paraphrasing what can already be found on the net, I refer you to these pages for more details, in French:
in English:
The following command gives detailed information about the file system
btrfs filesystem usage /data
This is the result
Overall:
    Device size:                   6.00TiB
    Device allocated:              2.04TiB
    Device unallocated:            3.95TiB
    Device missing:                  0.00B
    Used:                          2.04TiB
    Free (estimated):              3.95TiB  (min: 1.98TiB)
    Data ratio:                       1.00
    Metadata ratio:                   2.00
    Global reserve:              512.00MiB  (used: 0.00B)
    Multiple profiles:                  no

Data,single: Size:2.04TiB, Used:2.04TiB (99.98%)
   /dev/sdb1    2.04TiB

Metadata,DUP: Size:3.00GiB, Used:2.27GiB (75.65%)
   /dev/sdb1    6.00GiB

System,DUP: Size:8.00MiB, Used:240.00KiB (2.93%)
   /dev/sdb1   16.00MiB

Unallocated:
   /dev/sdb1    3.95TiB
So a total space of 6 TB with 2 TB used. To get information on the health of the device (I/O and corruption error counters), we will type the command
btrfs device stats /data
here is the result
[/dev/sdb1].write_io_errs 0
[/dev/sdb1].read_io_errs 0
[/dev/sdb1].flush_io_errs 0
[/dev/sdb1].corruption_errs 0
[/dev/sdb1].generation_errs 0
btrfs allows some maintenance operations: scrub, balance, defragmentation and trim.
On this page you will find more information on all of these operations.
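For reference, these operations can also be launched by hand; here is a quick sketch on my /data volume (the balance usage thresholds are only an example):
btrfs scrub start /data
btrfs scrub status /data
btrfs balance start -dusage=50 -musage=50 /data
btrfs filesystem defragment -r /data
fstrim -v /data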
All these operations can be automated; the set of scripts provided by btrfsmaintenance does this very well. It is a kind of toolbox for maintaining a btrfs file system. The official site is https://github.com/kdave/btrfsmaintenance. We retrieve the archive, which we unzip by typing
unzip btrfsmaintenance-master.zip
this gives the btrfsmaintenance-master directory, in which we type as root
./dist-install.sh
this gives
Installation path: /etc/sysconfig
We will do the same for the other snapshots (a sketch of how such a snapshot can be created and transferred is given after the listing below). Be careful: the host file system, here /run/mount2/, must also be formatted in btrfs. To list what this host file system contains, we will type:
btrfs subvolume list /run/mount2
here is the result
ID 257 gen 2883 top level 5 path backup
ID 886 gen 690 top level 257 path backup/2021-01-10-snapshot-bureautique
ID 890 gen 703 top level 257 path backup/2021-01-10-snapshot-homepage
ID 898 gen 731 top level 257 path backup/2021-01-10-snapshot-musiques
ID 921 gen 1575 top level 257 path backup/2021-01-10-snapshot-photos
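A snapshot like these can be produced on the /data file system and then transferred to this second disk. Here is a minimal sketch using the subvolume and snapshot names from the listings; note that btrfs cannot snapshot directly across file systems, hence the send/receive step:
btrfs subvolume snapshot -r /data/photos /data/.snapshots/2021-01-10-snapshot-photos
btrfs send /data/.snapshots/2021-01-10-snapshot-photos | btrfs receive /run/mount2/backup/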
To delete a snapshot, or more generally a subvolume, you must not use a classic rm; you will need to type
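btrfs subvolume delete <path-to-subvolume>
for example, to remove one of the snapshots listed above:
btrfs subvolume delete /run/mount2/backup/2021-01-10-snapshot-photos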
Snapper is a tool that helps you manage snapshots, even though it remains a command-line tool. I installed it on my Mageia simply with the urpmi command; for reference, the official site is http://snapper.io.
To illustrate how it works, I go back to my /data directory and create a configuration with the command
snapper -c data create-config /data
it returns
Failed to create configuration (creating btrfs subvolume .snapshots failed since it already exists).
Obviously I had already created a .snapshots subvolume earlier, which you can clearly see when I type
btrfs subvolume list /data
ID 2358 gen 9137 top level 5 path bureautique
ID 2359 gen 9143 top level 5 path homepage
ID 2360 gen 6631 top level 5 path musiques
ID 2361 gen 9140 top level 5 path photos
ID 2362 gen 9135 top level 5 path videos
ID 2368 gen 6151 top level 5 path .snapshots/2021-01-10-snapshot-bureautique
ID 2369 gen 6152 top level 5 path .snapshots/2021-01-10-snapshot-homepage
ID 2370 gen 6153 top level 5 path .snapshots/2021-01-10-snapshot-musiques
ID 2371 gen 6154 top level 5 path .snapshots/2021-01-10-snapshot-photos
ID 2372 gen 6155 top level 5 path .snapshots/2021-01-10-snapshot-videos
No problem: we will just rename my /data/.snapshots subvolume, which will then be recreated by snapper
cd /data
mv .snapshots .instantanes
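With the old subvolume out of the way, the configuration can be created and snapshots taken by hand; a minimal sketch (the configuration name data follows the command above):
snapper -c data create-config /data
snapper -c data create --description "before cleanup"
snapper -c data list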
Borg is a particularly powerful and easy-to-deploy backup tool; the official website is https://borgbackup.readthedocs.io/en/stable/ and you can find more information here or there. On my Mageia, for once, I simply installed the package provided by the distribution by typing
urpmi borgbackup
Another option is to install it with python and pip. Before going any further you will probably have to install the lib64python3-devel and lib64xxhash-devel packages, then as root you type
pip install -U pip setuptools wheel
pip install pkgconfig
pip install borgbackup
here is the result
Collecting borgbackup
  Downloading borgbackup-1.2.1.tar.gz (4.0 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 4.0/4.0 MB 1.3 MB/s eta 0:00:00
  Installing build dependencies ... done
  Getting requirements to build wheel ... done
  Installing backend dependencies ... done
  Preparing metadata (pyproject.toml) ... done
Requirement already satisfied: packaging in ./borg-env/lib/python3.8/site-packages (from borgbackup) (21.3)
Collecting msgpack!=1.0.1,<=1.0.4,>=0.5.6
  Downloading msgpack-1.0.4-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (322 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 322.5/322.5 kB 785.1 kB/s eta 0:00:00
Requirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in ./borg-env/lib/python3.8/site-packages (from packaging->borgbackup) (3.0.9)
Building wheels for collected packages: borgbackup
  Building wheel for borgbackup (pyproject.toml) ... done
  Created wheel for borgbackup: filename=borgbackup-1.2.1-cp38-cp38-linux_x86_64.whl size=3104598 sha256=6eb754bb4dbda17cbb090866401ae37cdf57675e533320e0816bbf3908e8583b
  Stored in directory: /home/olivier/.cache/pip/wheels/d6/35/34/951fead78c86a6b60981ed233d1aec7f66f23951fc13834b02
Successfully built borgbackup
Installing collected packages: msgpack, borgbackup
Successfully installed borgbackup-1.2.1 msgpack-1.0.4
it's all good!
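Note that the output above shows the packages landing in ./borg-env/lib/python3.8/..., i.e. inside a Python virtual environment. If you want to proceed the same way, a minimal sketch:
python3 -m venv borg-env
source borg-env/bin/activate
pip install -U pip setuptools wheel pkgconfig borgbackup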
The first thing to do is create a backup repository
borg init --encryption=authenticated /media/backups/
here is the result
Enter new passphrase:
Enter same passphrase again:
Do you want your passphrase to be displayed for verification? [yN]: y
Your passphrase (between double-quotes): "XXXX"
Make sure the passphrase displayed above is exactly what you wanted.

By default repositories initialized with this version will produce security
errors if written to with an older version (up to and including Borg 1.0.8).

If you want to use these older versions, you can disable the check by running:
borg upgrade --disable-tam /media/backups

See https://borgbackup.readthedocs.io/en/stable/changes.html#pre-1-0-9-manifest-spoofing-vulnerability
for details about the security implications.

IMPORTANT: you will need both KEY AND PASSPHRASE to access this repo!
Use "borg key export" to export the key, optionally in printable format.
Write down the passphrase. Store both at safe place(s).
You must choose an encryption mode; the authenticated mode does not encrypt the data, it only authenticates it, with an HMAC-SHA256 to be precise.
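If you want the stored data to actually be encrypted rather than just authenticated, borg also offers the repokey and keyfile modes; for example:
borg init --encryption=repokey /media/backups/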
To create an archive containing the /data directory, type
borg create /media/backups::2021-01-12 /data
to list the backups in the repository, type
borg list /media/backups
You will need to enter the passphrase
Enter passphrase for key /media/backups:
and this is what it can give after a certain time, with the script detailed further down on this page
2021-06-30    Wed, 2021-06-30 04:46:41 [570c42c2f51e83b69f7845d50b928ac42581745b587c2b34c16daefd2b2a44c6]
2021-07-31    Sat, 2021-07-31 04:41:52 [61196c2af59b05045a94ca88b213bc37ff640295ac09f72c54e7dcdc02b2eea2]
2021-08-29    Sun, 2021-08-29 04:42:29 [4eb7f68a996b5ea9f38267cf9d205c0572887f9695f3c5fe8b7719f5a23988a4]
2021-09-30    Thu, 2021-09-30 04:28:54 [739a9068bee4d61b5d3e19e5065ba7c14ca16c2097656d6006e6211967651f76]
2021-10-31    Sun, 2021-10-31 04:49:47 [d95f0a77bd03897a17bd88b71e09e5f458fc94745d3077d4a350fdee3a231c7d]
2021-11-21    Sun, 2021-11-21 04:44:25 [b6c889c3fcd7dc1fd5dbcf5aa7a98487cd869ff389189680f9e8ec250141de46]
2021-11-28    Sun, 2021-11-28 04:44:20 [8fcfc9aacb7740eb19091d582a3660ebeb7853e3371118c9fce78095d522ec47]
2021-11-30    Tue, 2021-11-30 04:52:15 [7c39c801b2bc4c66076d92d124adcc36b4cf77a932a106cb3e37b6d8b374a133]
2021-12-05    Sun, 2021-12-05 04:11:58 [7f5d1ac609cc48dd2591aaff5ee38638746ccc90cd25d9aae9c4e4fe0811d1c4]
2021-12-12    Sun, 2021-12-12 04:24:59 [f70f132e630eada058f392297396b4dff4030e0a90dd32ef75103ee29f2ac821]
2021-12-15    Wed, 2021-12-15 04:26:36 [72680bbdcbe24c53ff5021f094c8f3092003409d982b468adca06714ed240767]
2021-12-16    Thu, 2021-12-16 04:34:44 [38d5fec4a3cf5eaa288b99ec0b5569f638875f9eb678d2cb3af6b3a74851b7b9]
2021-12-17    Fri, 2021-12-17 04:24:19 [535a5c89e9b395f9b49b55d15563f9cd34e0269a46854e3aff037709f8372e38]
2021-12-18    Sat, 2021-12-18 04:26:07 [c4ef77daa969f45453afd5dfb0d7232ffaad6de0afcff73beb19349aeafcb314]
2021-12-19    Sun, 2021-12-19 04:24:52 [6fa254f97fc3a00fc9ff33e6e14800b08e2654509bfd56cb1ed7837ea8cc889b]
2021-12-20    Mon, 2021-12-20 04:34:39 [1cfadcf67a366703b07f36395dc048b806609d086ee149e1ff995358cc34debd]
2021-12-21    Tue, 2021-12-21 04:33:02 [fbeca3841f3a9340222927c769106111bf5f982324de405f7008a2ad930db6e8]
For the current week we find one backup per day; then the spacing increases: one backup per week for the current month, then one backup per month for the previous months. In this case the history goes back six months.
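This retention pattern corresponds to the borg prune policy used in the script further down this page:
borg prune -v --keep-daily=7 --keep-weekly=4 --keep-monthly=6 /media/backups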
To view the contents of the archive
borg list /media/backups::2021-01-09
Enter passphrase for key /media/backups:
and the content is displayed
drwxr-xr-x root root    0 Wed, 2021-01-06 17:16:14 etc
drwxr-xr-x root root    0 Sun, 2020-11-22 11:24:56 etc/profile.d
-rw-r--r-- root root  143 Fri, 2019-11-01 10:28:30 etc/profile.d/30python2.csh
-rwxr-xr-x root root 1552 Wed, 2018-09-26 04:45:42 etc/profile.d/40configure_keyboard.sh
-rwxr-xr-x root root  243 Mon, 2020-08-24 18:06:27 etc/profile.d/60qt5.csh
-rwxr-xr-x root root  444 Mon, 2020-08-24 18:06:27 etc/profile.d/60qt5.sh
-rw-r--r-- root root 1144 Sat, 2020-06-06 08:01:08 etc/profile.d/01msec.csh
-rw-r--r-- root root  561 Sat, 2020-06-06 08:01:08 etc/profile.d/01msec.sh
(...)
To restore to the current directory, type
borg extract /media/backups::2021-01-09
For a restore into the /data/temp directory, go into that directory first, since borg always extracts into the current directory
cd /data/temp
borg extract /media/backups::2021-01-09
Another solution is to mount the archive in a temporary directory
borg mount /media/backups::2021-01-09 /media/borg
we recover what we want then we unmount the archive
borg umount /media/borg
To delete an archive
borg delete /media/backups::2021-01-09
Note that if you get the following error when mounting the archive
borg mount not available: no FUSE support, BORG_FUSE_IMPL=pyfuse3,llfuse
you will need to install the python3-llfuse package
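On Mageia that is simply:
urpmi python3-llfuse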
We will start by putting the authentication passphrase for the borg repository in a passphrase file under the /root/.borg directory
mkdir /root/.borg
then we create the file /root/.borg/passphrase containing the passphrase (with your favorite editor) and give read permission on it to root only
chmod 400 /root/.borg/passphrase
I then created a file /etc/cron.daily/sauvegarde that is launched daily at 4 a.m. on my Mageia. It first runs a battery of disk integrity tests and only launches the /usr/sbin/borg-sauve script if the tests pass.
#!/bin/bash
# run the hardware RAID integrity test
/usr/local/linux/system/hwraid-master/wrapper-scripts/megaclisas-status > /tmp/megastatus 2>&1
# path of the backup repository, which is on an external disk
distant1="/media/backups"
# check that the backup disk is present
if [ ! -e "$distant1" ]
then
    # the disk is not mounted, just send the RAID status email and stop the script
    cat /tmp/megastatus | mail -s "State raid" olivier
    exit
fi
# check the health status of the external hard drive
/usr/sbin/smartctl -a /dev/sdc >> /tmp/megastatus 2>&1
# send the disk status email
cat /tmp/megastatus | mail -s "State disk mana" olivier
# check the RAID status, stop everything if it is in Degraded mode
raid=$(MegaCli64 -LDInfo -L1 -a0 | grep State)
if echo $raid | grep Degraded >/dev/null 2>&1
then
    exit
fi
# check the external hard drive status
ddur=$(/usr/sbin/smartctl -A /dev/sdc)
if echo $ddur | grep FAILING_NOW >/dev/null 2>&1
then
    exit
fi
/usr/sbin/borg-sauve
We come to the /usr/sbin/borg-sauve script; the references that helped me write it are:
https://code.crapouillou.net/snippets/1
https://www.geek-directeur-technique.com/2017/07/17/usage-de-mysqldump
https://borgbackup.readthedocs.io/en/stable/quickstart.html#automating-backups
In addition to making regular backups of selected directories to the repository, the script also backs up the MySQL and LDAP databases and sends emails to report on the backup. Here is the content of the script
#!/bin/bash
# Borg based backup script
# Backups are encrypted

set -e

# date/logging function
ts_log() {
    echo `date '+%Y-%m-%d %H:%M:%S'` $1 >> ${LOG_PATH_TMP}
}

# binary path definitions
BORG=/usr/bin/borg
MYSQLDUMP=/usr/local/mysql/bin/mysqldump
MYSQL=/usr/local/mysql/bin/mysql
SLAPCAT=/usr/local/sbin/slapcat

# variable definitions
BACKUP_DATE=`date +%Y-%m-%d`
LOG_PATH_TMP=/var/log/backup/borg-backup-tmp.log
LOG_PATH=/var/log/backup/borg-backup.log
export BORG_PASSPHRASE="`cat /root/.borg/passphrase`"
BORG_REPOSITORY=/media/backups
BORG_ARCHIVE=${BORG_REPOSITORY}::${BACKUP_DATE}

# variables for the MySQL and LDAP databases
# the MySQL root password is read from this file
MYSQL_ROOT_PASS=`cat /root/.mysql/passphrase`
DATABASES=`MYSQL_PWD=$MYSQL_ROOT_PASS $MYSQL -u root -e "SHOW DATABASES;" | tr -d "| " | grep -v -e Database -e _schema -e mysqli -e sys`
LDAP_TMP_DUMP_FILE=/var/log/backup/ldap/ldap-db.ldif

# here we go, we start by logging the backup date
rm -f $LOG_PATH_TMP
ts_log "Starting new backup ${BACKUP_DATE}..."

# dump the MySQL databases
ts_log 'Copying MySQL databases...'
for DB_NAME in $DATABASES;
do
    MYSQL_PWD=$MYSQL_ROOT_PASS $MYSQLDUMP -u root --single-transaction --skip-lock-tables $DB_NAME > /var/log/backup/mysql/$DB_NAME.sql
done

# dump the LDAP database
ts_log 'Copying the LDAP database...'
$SLAPCAT -l $LDAP_TMP_DUMP_FILE

# create the borg archive
# listing the directories to back up
ts_log "Creating the archive ${BORG_ARCHIVE}"
$BORG create \
    -v --stats --compression lzma,9 \
    $BORG_ARCHIVE \
    /etc /usr/local/apache2 /usr/local/etc /data /home \
    /var/log/backup/mysql \
    $LDAP_TMP_DUMP_FILE \
    >> ${LOG_PATH_TMP} 2>&1

# Cleaning old backups. We keep:
# - one archive per day for the last 7 days,
# - one archive per week for the last 4 weeks,
# - one archive per month for the last 6 months.
ts_log "Rotating old backups"
$BORG prune -v $BORG_REPOSITORY \
    --keep-daily=7 \
    --keep-weekly=4 \
    --keep-monthly=6 \
    >> ${LOG_PATH_TMP} 2>&1

# mail the report and append it to the main log
cat $LOG_PATH_TMP | mail -s "Backup" olivier
cat $LOG_PATH_TMP >> ${LOG_PATH}
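Not part of the script, but it can be worth verifying the consistency of the repository from time to time with borg's built-in check command:
borg check /media/backups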
Unison is a bidirectional synchronization tool, which means it is up to you to decide which of the two instances of a file is the right one. It gives you total control over the copy and spares you many unpleasant surprises. The downside is that it is not automatic: it is manual and can take time, but it can be combined with automatic solutions.
Unison is available in any modern distribution. When creating a synchronization profile you have to choose its connection type; in my case it is a local mount.
You then choose the directories to synchronize; if one of the directories is empty, Unison simply makes a full copy.
Once the sync profiles have been created, you will only need to perform manual syncs on a regular basis.
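For reference, a local profile is just a small text file under ~/.unison/; here is a minimal sketch with illustrative paths (saved as data.prf and launched with "unison data" or from the graphical interface):
# ~/.unison/data.prf -- illustrative local two-root profile
root = /data
root = /media/backups/unison
# keep the confirmation step so nothing is propagated without review
auto = false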