
btrfs (B-Tree file system)

mkfs.btrfs -L LABEL -d raid1 -m raid1 /dev/sdd1 /dev/sde1
  1. ! MAKE BACKUPS !!!
  2. gdisk /dev/sdx (the new disk): one GPT partition spanning the whole disk
  3. mkfs.btrfs /dev/sdx1
  4. mount /dev/sdx1 /mnt/data
  5. rsync -a --info=progress2 /mnt/olddata /mnt/data
  6. gdisk /dev/sdy (the old disk, after verifying the copy): one GPT partition spanning the whole disk
  7. btrfs device add /dev/sdy1 /mnt/data
  8. btrfs balance start -dconvert=raid0 -mconvert=raid1 /mnt/data
  9. done.
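The steps above can be sketched as one script. This is a non-authoritative sketch: /dev/sdx (new disk), /dev/sdy (old disk) and the mount points are placeholders, and sgdisk is used here in place of interactive gdisk. Every command is destructive, so make backups first:

```shell
# DESTRUCTIVE sketch: /dev/sdx is the new, empty disk,
# /dev/sdy the old disk whose data is mounted at /mnt/olddata.
set -euo pipefail

sgdisk --zap-all --new=1:0:0 /dev/sdx   # one GPT partition over the whole disk
mkfs.btrfs /dev/sdx1
mount /dev/sdx1 /mnt/data

# Trailing slash copies the contents of olddata, not the directory itself.
rsync -a --info=progress2 /mnt/olddata/ /mnt/data/

# Only after verifying the copy: wipe and repartition the old disk.
umount /mnt/olddata
sgdisk --zap-all --new=1:0:0 /dev/sdy
btrfs device add /dev/sdy1 /mnt/data

# Stripe data across both disks (raid0), mirror metadata (raid1).
btrfs balance start -dconvert=raid0 -mconvert=raid1 /mnt/data
```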

A write hole[1] occurs when a file is written while a device is disconnected, e.g. due to a power failure. If the metadata hasn't been written consistently, files may be lost even though the data appears to have been saved. The Btrfs developers warn that their RAID5/6 code is particularly susceptible to this issue and discourage the use of RAID5/6, even though improvements have been made over the years.

This is hard to reproduce[2] and can be partially mitigated by setting the metadata profile to raid1c3 or higher.
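For example (a sketch; raid1c3 is a real btrfs profile, but the device names and mount point are placeholders), metadata can be kept in three copies, so a torn write on one device can still be repaired from the other two:

```shell
# Create a filesystem with raid5 data but triple-mirrored metadata.
# raid1c3 needs at least three devices and kernel/btrfs-progs >= 5.5.
mkfs.btrfs -d raid5 -m raid1c3 /dev/sdx1 /dev/sdy1 /dev/sdz1

# Or convert the metadata of an existing filesystem in place:
btrfs balance start -mconvert=raid1c3 /mnt/data
```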

  1. ! MAKE BACKUPS !!! In gdisk /dev/sdx: b → sdx-backup (writes a backup of the partition table to a file)
  2. note output from blkid /dev/sdx1
  3. btrfs fi res
  4. note the devid 1 size from the output of btrfs fi sh --raw /mnt/point
  5. note the Partition unique GUID and Logical sector size from the output of gdisk -l /dev/sdx
  6. gdisk /dev/sdx: d → n → 1 → 2048 → 2048+size/Logical sector size → 8300 → x → c → PARTUUID → w
  7. btrfs check /dev/sdx1
  8. mount the filesystem, then btrfs scrub start /mnt/point (check progress with btrfs scrub status /mnt/point)
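The end sector in step 6 is just the start sector plus the filesystem size in sectors. A small sketch with made-up example values (the real numbers come from btrfs fi sh --raw and gdisk -l):

```shell
# Example values only -- substitute the numbers noted in steps 4 and 5.
SIZE_BYTES=250059350016   # devid 1 size in bytes (btrfs fi sh --raw)
SECTOR_SIZE=512           # logical sector size (gdisk -l)
START=2048                # first sector of the original partition

END=$(( START + SIZE_BYTES / SECTOR_SIZE ))   # 488399216 for these values
echo "recreate with: n -> 1 -> $START -> $END -> 8300"
```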

There's a driver for Windows called WinBtrfs.


[1] usually in RAID5, RAID6 or in rare cases even RAID1
[2] several people, e.g. u/Rohrschacht (2020) on reddit, have tried to reproduce this, with limited success
  • Last modified: 2022-09-26 13:40