Bug #6228

Persistent partition needs recovery after clean shutdown

Added by sjmurdoch 2013-08-09 05:00:09. Updated 2013-09-19 06:43:19.

Status:
Resolved
Priority:
Normal
Assignee:
Category:
Persistence
Target version:
Start date:
2013-08-09
Due date:
% Done:
0%

Feature Branch:
bugfix/unmount-persistent-volume-on-shutdown
Type of work:
Code
Blueprint:

Starter:
0
Affected tool:
Deliverable for:

Description

After performing a clean shutdown by choosing “Shutdown immediately”, the persistent ext4 partition needs to perform recovery. As a result, if the USB drive is set read-only through its hardware switch, the Tails boot will hang after the persistent disk encryption password is entered. This is because, if ext4 detects that recovery is needed but the device is read-only, the mount will fail. If the disk is not read-only, there is a dmesg log entry reporting that EXT4 recovery completed.
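
One way to check whether the journal needs replay, without attempting a mount, is to inspect the superblock features. A sketch (/dev/mapper/TailsData_unlocked is the usual mapping name for the Tails persistent volume; adjust if yours differs):

# Print the superblock and look for the "needs_recovery" feature flag.
dumpe2fs -h /dev/mapper/TailsData_unlocked | grep -i features
# If "needs_recovery" is listed, a mount will fail while the underlying
# device is write-protected, because the kernel cannot replay the journal.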

I was able to work around this by manually unmounting all the persistent partitions and removing the cryptsetup mapping before shutting down.

This is using Tails 0.19 with persistence enabled (apt-cache, apt-lists, persistent folder, and dotfiles).


History

#1 Updated by intrigeri 2013-08-09 06:00:40

sjmurdoch will try to reproduce it in the next few days, saving the output of fuser -m /live/persistence/*_unlocked each time, so that we can see if there’s a pattern.
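
For instance, something along these lines before each shutdown (the log location is only a suggestion):

# Append a timestamped snapshot of which processes are using the
# persistent filesystems; the log path is just a suggestion.
date >> /home/amnesia/Persistent/fuser.log
fuser -vm /live/persistence/*_unlocked >> /home/amnesia/Persistent/fuser.log 2>&1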

#2 Updated by sjmurdoch 2013-08-09 06:01:50

On reboot (with the hardware switch set to read-write but the greeter’s read-only option enabled), the following appears in /var/log/syslog:

Aug  9 12:54:56 localhost kernel: [   58.531199] EXT4-fs (dm-0): INFO: recovery required on readonly filesystem
Aug  9 12:54:56 localhost kernel: [   58.531205] EXT4-fs (dm-0): write access will be enabled during recovery
Aug  9 12:54:56 localhost kernel: [   58.589742] EXT4-fs (dm-0): recovery complete
Aug  9 12:54:56 localhost kernel: [   58.600295] EXT4-fs (dm-0): mounted filesystem with ordered data mode. Opts: (null)

#3 Updated by sjmurdoch 2013-08-09 07:42:29

When the problem occurs, the only user of the partition (according to fuser) is xmpp-client, which is being executed from it. xmpp-client appears to terminate when sent signal 15 (SIGTERM), so I would not have thought it would cause a problem for an unmount.

It would be interesting to know if other people have this problem. It could be that others are affected but do not notice, because EXT4 will automatically recover unless the USB drive is write-protected by a hardware switch. Look in /var/log/syslog for “EXT4” to see whether “recovery complete” is mentioned.
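
For example:

# Search the current syslog for ext4 journal recovery messages:
grep EXT4 /var/log/syslog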

#4 Updated by sjmurdoch 2013-08-09 08:49:12

I just ran this test without starting xmpp-client (or explicitly starting any other applications) and the problem remains. So perhaps it is not related to xmpp-client after all; rather, something else is causing the disk either not to be unmounted properly, or to be modified after it was unmounted.

#5 Updated by intrigeri 2013-08-09 12:10:53

> I just ran this test without starting xmpp-client (or starting any other applications
> explicitly) and the problem remains. So maybe it is not to do with xmpp-client and
> rather something else which is causing either the disk to not be unmounted properly
> or other changes happening after it was unmounted.

Could you please retry with Tails 0.20 and confirm it’s affected too?

#6 Updated by sjmurdoch 2013-08-09 13:01:25

> Could you please retry with Tails 0.20 and confirm it’s affected too?

Sure, but I won’t be able to do so for a few days.

#7 Updated by intrigeri 2013-08-10 01:07:49

  • Assignee set to sjmurdoch

#8 Updated by sjmurdoch 2013-08-13 12:44:58

intrigeri wrote:
> Could you please retry with Tails 0.20 and confirm it’s affected too?

I’ve now checked and the behaviour appears unchanged in Tails 0.20.

Before shutdown there are no users of /live/persistence/*_unlocked reported by fuser.

I investigated further by editing /etc/rc0.d/K11unmountroot to start a shell after remounting / read-only.
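
Roughly like this (a debugging hook only; the exact place is wherever the script remounts / read-only):

# Added right after the step that remounts / read-only: drop to an
# interactive shell on the console so the mount state can be inspected
# before the machine halts.
/bin/sh </dev/console >/dev/console 2>&1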

The screenshot here shows the status: http://www.cl.cam.ac.uk/~sjm217/volatile/tails-shutdown.jpg

As you can see, the persistence partitions have not been unmounted, which would explain why the filesystems require recovery on boot. At what stage were they supposed to have been unmounted?

#9 Updated by sjmurdoch 2013-08-15 10:25:09

Here is the script I run before shutdown which seems to work around the issue:

#!/bin/bash
# Work-around: unmount everything backed by the persistent volume and
# close its dm-crypt mapping before shutting down.

# Unmount entries whose source (first column of mount output) is a path
# under /live/persistence/*_unlocked, i.e. the persistence bind mounts.
for fs in $(mount | awk '{print $1}' | grep '/live/persistence/.*_unlocked'); do
  umount "$fs"
done

# Then unmount the persistent volume itself, mounted from /dev/mapper/*_unlocked.
for fs in $(mount | awk '{print $1}' | grep '/dev/mapper/.*_unlocked'); do
  umount "$fs"
done

# Finally, close the dm-crypt mapping: "dmsetup status --target crypt" prints
# one "NAME: ..." line per crypt mapping, so take the name before the colon.
cryptsetup luksClose "$(dmsetup status --target crypt | cut -d: -f1)"
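
Note that the script has to be run as root, since umount, cryptsetup and dmsetup all require it.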

#10 Updated by intrigeri 2013-08-19 02:48:59

  • Status changed from New to In Progress
  • Assignee changed from sjmurdoch to intrigeri

I can confirm this, and I think I’ve tracked it down to a bug in live-config. I’m working on a fix that I will submit upstream.

sjmurdoch: thanks for the detailed bug report and analysis!

#11 Updated by intrigeri 2013-08-22 06:23:38

  • Category set to Persistence
  • Assignee changed from intrigeri to anonym
  • QA Check set to Ready for QA
  • Feature Branch set to bugfix/unmount-persistent-volume-on-shutdown

#12 Updated by intrigeri 2013-09-03 11:33:15

  • Status changed from In Progress to Fix committed
  • Assignee deleted (anonym)
  • QA Check changed from Ready for QA to Pass

#13 Updated by intrigeri 2013-09-19 06:43:19

  • Status changed from Fix committed to Resolved

Fixed in 0.20.1.