Feature #9401

Evaluate peak working directory size during a full test suite run

Added by intrigeri 2015-05-14 09:28:44. Updated 2015-06-03 16:35:56.

Status: Resolved
Priority: Elevated
Assignee:
Category: Test suite
Target version:
Start date: 2015-05-14
Due date:
% Done: 100%
Feature Branch:
Type of work: Test
Blueprint:
Starter:
Affected tool:
Deliverable for:

Description

… with --keep-snapshots and with the upcoming (Feature #6094, Feature #8008, test/wip-improved-snapshots Git branch) increased snapshots usage.


History

#1 Updated by intrigeri 2015-05-16 08:01:15

On current stable (that is, without the upcoming new snapshots handling), with --keep-snapshots, disk usage for a full run peaks at 20 GB.
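
For the record, a peak like this can be captured by simply polling the suite's working directory while the run is in progress. Below is a minimal, hypothetical sketch of that approach; the /tmp/TailsToaster path and the 10-second poll interval are assumptions, and the figure above may well have been obtained differently.

    #!/usr/bin/env python3
    # Poll a directory and remember the largest total size seen.
    # Apparent file sizes are summed, so sparse files would be counted
    # slightly differently than by du.
    import os
    import time

    TMPDIR = "/tmp/TailsToaster"   # assumed working directory of the run
    INTERVAL = 10                  # seconds between samples

    def tree_size(path):
        """Total apparent size of all regular files under path, in bytes."""
        total = 0
        for root, _dirs, files in os.walk(path, onerror=lambda e: None):
            for name in files:
                try:
                    total += os.lstat(os.path.join(root, name)).st_size
                except OSError:
                    pass  # file vanished between listing and stat
        return total

    peak = 0
    try:
        while True:
            peak = max(peak, tree_size(TMPDIR))
            time.sleep(INTERVAL)
    except KeyboardInterrupt:
        print("peak: %d KiB (%.1f GiB)" % (peak // 1024, peak / 2**30))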

#2 Updated by intrigeri 2015-05-30 19:25:12

  • Priority changed from Normal to Elevated

I’ve completed the research for all the other bits of Feature #9400, so this ticket is now blocking the hardware purchase (and, in turn, Feature #6186, since I’ve noticed along the way that we clearly don’t have enough available storage to handle all those autobuilt ISOs) => bumping priority.

anonym, any ETA? I can probably take care of it if your plate is too full already and this one is too much.

#3 Updated by anonym 2015-06-02 11:22:51

  • Description updated

#4 Updated by anonym 2015-06-03 14:09:01

  • Assignee changed from anonym to intrigeri
  • QA Check set to Info Needed

Here are the results from running the full test suite on Tails 1.4 (with 1.3.2 as the old ISO):

When running in the stable branch at commit 621fa0a: 8134636 KiB (≈7.8 GiB). In other words, the peak is completely dominated by the 8 GiB memory dump in erase_memory.feature.

When running in the test/wip-improved-snapshots branch at commit 565309f: 14812332 KiB (≈14.1 GiB). All snapshots are already generated by the time we run erase_memory.feature, and that is when we reach this peak. Hence, if we were to order the .feature files and run erase_memory.feature (which doesn’t use any snapshots) first, we would lower the peak disk usage to exactly the level measured for stable above. Specifically, the peak would then be max(SIZE_OF_ALL_SNAPSHOTS, 8 GiB). Currently SIZE_OF_ALL_SNAPSHOTS is approximately 14812332 KiB - 8 GiB ≈ 7 GiB, so we’re pretty close to the point where the size of the snapshots will start to dominate the peak, i.e. if we want to add more snapshots, we’ll be worse off with this branch.
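
For reference, the arithmetic in the previous two paragraphs works out roughly as follows. This is just a sketch of the max() model, treating the reported sizes as KiB figures; the peak() helper is illustrative and not anything in the test suite.

    # Figures from this comment, in KiB.
    KIB_PER_GIB = 2**20
    stable_peak = 8134636          # stable at 621fa0a
    branch_peak = 14812332         # test/wip-improved-snapshots at 565309f
    memory_dump = 8 * KIB_PER_GIB  # erase_memory.feature dump

    # What the snapshots contribute on top of the dump: roughly 6-7 GiB.
    size_of_all_snapshots = branch_peak - memory_dump
    print("snapshots: %.1f GiB" % (size_of_all_snapshots / KIB_PER_GIB))

    # With erase_memory.feature (no snapshots) run first, the peak becomes:
    def peak(snapshots_kib):
        return max(snapshots_kib, memory_dump)

    print("peak with ordering: %.1f GiB"
          % (peak(size_of_all_snapshots) / KIB_PER_GIB))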

However, for almost all scenarios where we currently do not have an optimal snapshot (i.e. where some extra, time-consuming steps have to be run for each scenario in a feature), I think we could add optimal snapshots flagged as “transient”, to be deleted once their feature is done. In all the instances I can see, these would be RAM-only snapshots of ~1 GiB each, so we’d still be in the 8 GiB range.

Above I say “almost all” because I can imagine that we may want two more snapshots, for when we have logged in with a USB installation and with persistence (currently all such snapshots are taken before we log in). That would require perhaps ~2 GiB more space (i.e. 1 GiB for the RAM snapshot of each of the two), which would put us in the 10 GiB range.

I could be wrong, but I don’t think we’d need much more than 10 GiB for the foreseeable future if we do everything I suggest above. Still, adding a few GiB of headroom would be nice, especially since disk space is pretty cheap and we won’t have that many tester VMs on lizard.
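
To make that estimate easy to follow, here is a rough tally of the figures from the last few paragraphs; all numbers are the approximations quoted above, not new measurements, and the variable names are mine.

    # Rough disk budget implied by the estimates above (GiB, all approximate).
    memory_dump      = 8.0  # erase_memory.feature dump; uses no snapshots
    persistent_snaps = 7.0  # roughly today's SIZE_OF_ALL_SNAPSHOTS
    transient_snap   = 1.0  # RAM-only snapshot, deleted when its feature is done
    extra_logged_in  = 2    # possible logged-in USB-install / persistence snapshots

    # With .feature ordering and transient snapshots:
    peak_now = max(memory_dump, persistent_snaps + transient_snap)
    # With the two additional logged-in snapshots on top:
    peak_later = max(memory_dump,
                     persistent_snaps + extra_logged_in * transient_snap
                     + transient_snap)

    print(peak_now)    # 8.0  -> "still in the 8 GiB range"
    print(peak_later)  # 10.0 -> "the 10 GiB range"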

I’m not closing the ticket, as you suggested, in case something isn’t clear.

#5 Updated by intrigeri 2015-06-03 14:42:28

  • Status changed from Confirmed to Resolved
  • Assignee deleted (intrigeri)
  • % Done changed from 0 to 100
  • QA Check deleted (Info Needed)

Thanks!

#6 Updated by anonym 2015-06-03 16:35:56

anonym wrote:
> When running in the stable branch at commit 621fa0a: 8134636 KiB (≈7.8 GiB). In other words, the peak is completely dominated by the 8 GiB memory dump in erase_memory.feature.

That result was without --keep-snapshots. Since you originally asked for it, FWIW: the result with --keep-snapshots is 20769316 KiB (≈19.8 GiB).