Feature #6876
Have the incremental upgrade process use less RAM
% Done: 100%
Description
As part of this work, we should update some hardcoded constants:
- memory_needed and space_needed in Tails::IUK::Frontend (the very goal of this work is to decrease these values)
- MIN_AVAILABLE_MEMORY in config/chroot_local-includes/usr/local/bin/tails-upgrade-frontend-wrapper (since Feature #17152 was done, likely the Upgrader can work with less free memory)
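For context, a minimal sketch (shell) of the kind of free-memory check the wrapper performs before launching the Upgrader; the threshold and helper below are illustrative placeholders, not the actual MIN_AVAILABLE_MEMORY value or wrapper code:

    #!/bin/sh
    # Illustrative sketch only: refuse to start the Upgrader unless enough
    # memory is available. The threshold is a placeholder; the real value
    # lives in tails-upgrade-frontend-wrapper as MIN_AVAILABLE_MEMORY.
    MIN_AVAILABLE_MEMORY_KIB=$((300 * 1024))

    available_memory_kib() {
        # MemAvailable is the kernel's estimate of memory available for new
        # workloads without swapping.
        awk '/^MemAvailable:/ {print $2}' /proc/meminfo
    }

    if [ "$(available_memory_kib)" -lt "$MIN_AVAILABLE_MEMORY_KIB" ]; then
        echo "Not enough available memory to check for upgrades." >&2
        exit 1
    fi

Lowering memory_needed and MIN_AVAILABLE_MEMORY then simply means lowering such thresholds once the Upgrader itself has been made to need less RAM.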
Files
Related issues
- Related to Tails - | Rejected | 2014-03-07
- Related to Tails - | Rejected | 2015-08-28
- Related to Tails - | Resolved |
- Related to Tails - | Resolved | 2014-12-18
- Related to Tails - | Resolved | 2018-09-14
- Related to Tails - | Rejected | 2018-09-30
- Related to Tails - Feature #5502: Next time we bump RAM requirements: notify user at runtime if RAM requirements are not met | Confirmed
- Blocks Tails - Feature #16209: Core work: Foundations Team | Confirmed
History
#1 Updated by intrigeri 2014-03-07 12:09:48
Once Tails is based on Wheezy, porting the whole thing from Moose to Moo might help, but perhaps there are lower-hanging fruit to pick first.
#2 Updated by intrigeri 2015-08-15 08:34:04
Memory::Usage is now in sid, which should help.
#3 Updated by intrigeri 2015-08-28 10:52:02
- related to Feature #10115: Add a splash screen to Tails persistence assistant added
#4 Updated by intrigeri 2017-09-12 21:25:21
Idea: instead of storing files to deploy inside tarballs inside the IUK tarball, we could store files to deploy in a SquashFS inside a SquashFS IUK, so we can mount a couple of SquashFS’es instead of extracting archives recursively. Incidentally this would drop the “needs N times the size of an IUK to install it” requirement. Note that we need 2 levels of bundling as the top level needs control files for the update.
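A rough sketch of the mount-instead-of-extract idea, with made-up file names; the real IUK layout is defined in tails-iuk, so treat every path below as an assumption:

    #!/bin/sh
    # Illustrative only: mount a hypothetical SquashFS IUK read-only, then
    # mount the inner SquashFS it contains, instead of recursively extracting
    # tarballs. Mounting needs no scratch space proportional to the IUK size.
    set -e

    IUK=/path/to/example.iuk        # hypothetical IUK packaged as SquashFS
    OUTER=/mnt/iuk
    INNER=/mnt/iuk-system

    mkdir -p "$OUTER" "$INNER"
    mount -o loop,ro -t squashfs "$IUK" "$OUTER"
    # Hypothetical inner image name:
    mount -o loop,ro -t squashfs "$OUTER/system.squashfs" "$INNER"
    # Files to deploy can now be read from $INNER without a full extraction.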
#5 Updated by anonym 2018-02-04 11:53:57
intrigeri wrote:
> Idea: instead of storing files to deploy inside tarballs inside the IUK tarball, we could store files to deploy in a SquashFS inside a SquashFS IUK, so we can mount a couple of SquashFS’es instead of extracting archives recursively.
I love it! So we could use any format we can mount (e.g. a raw disk image containing a FAT fs), but compression of the first layer would be nice, so SquashFS seems like a fine choice.
> Note that we need 2 levels of bundling as the top level needs control files for the update.
Let’s just store the things we would put in the second layer bundle in subdirectories (e.g. ./boot.tar.bz2 → ./boot/; ./system.tar.bz2 → ./system/). It’s simpler and should result in better (or at least no worse than equal) compression.
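To illustrate the suggestion, a sketch of how such a single-layer IUK could be built, with subdirectories instead of a second layer of tarballs; the directory and control-file names are assumptions, not the actual IUK v2 specification:

    #!/bin/sh
    # Illustrative sketch: control files at the top level, payload under
    # ./boot/ and ./system/, everything compressed once by SquashFS.
    set -e

    staging=$(mktemp -d)
    mkdir -p "$staging/boot" "$staging/system"
    cp -a new-boot-files/.   "$staging/boot/"      # hypothetical input dirs
    cp -a new-system-files/. "$staging/system/"
    cp control.yml           "$staging/"           # hypothetical control file

    # One SquashFS, one compression pass over the whole payload.
    mksquashfs "$staging" example.iuk -comp xz -no-progress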
#6 Updated by anonym 2018-02-04 11:55:47
- Priority changed from Low to Normal
Without this, the increased memory requirements of Feature #11131 will make it impossible to upgrade with 2 GB of RAM.
#7 Updated by anonym 2018-02-04 11:56:05
- blocks Feature #11131: Endless automatic upgrades added
#8 Updated by anonym 2018-02-04 15:04:31
- Status changed from Confirmed to In Progress
Applied in changeset commit:10950ddfe3611a0a8d89fd63fb68bbc9c63ee3a2.
#9 Updated by intrigeri 2018-02-06 15:29:32
- blocked by deleted (Feature #11131: Endless automatic upgrades)
#10 Updated by intrigeri 2018-02-06 15:30:06
- related to Feature #15281: Stack one single SquashFS diff when upgrading added
#11 Updated by intrigeri 2018-08-18 08:52:22
- related to Feature #9373: Make tails-iuk support overlayfs added
#12 Updated by intrigeri 2018-08-18 08:57:34
- % Done changed from 0 to 10
- Feature Branch set to feature/11131-endless-upgrade
The corresponding design doc changes were made on feature/11131-endless-upgrade (tails.git).
IMO we should ship this in Tails 4.0 along with Feature #15281 and Feature #8415, to reduce the number of times we break automatic upgrades.
#13 Updated by intrigeri 2018-08-18 08:59:06
- Subject changed from Research how the incremental upgrade process could use less RAM to Have the incremental upgrade process use less RAM
- Type of work changed from Research to Code
#14 Updated by Anonymous 2018-08-19 06:06:42
- related to Feature #8415: Migrate from aufs to overlayfs added
#15 Updated by Anonymous 2018-08-19 06:07:20
- Target version set to SponsorT_2016_Internal
Setting target version accordingly.
#16 Updated by Anonymous 2018-08-19 06:07:53
- Target version changed from SponsorT_2016_Internal to Tails_4.0
Correct target version…
#17 Updated by intrigeri 2018-08-19 08:26:44
- Target version deleted (Tails_4.0)
u wrote:
> Setting target version accordingly.
I think this is slightly premature. This may transitively block Feature #8415, but 1. only if we care about 2GB systems in this context, which we have not decided yet; 2. Feature #8415 was added to our 2018 roadmap only because someone was ready to do it on their volunteer time, which did not happen, so I’d rather not have this on my 4.0 plate until we update the status of this big pile of inter-related tickets at the summit. Spoiler alert: I’ll propose they’re explicitly added to the FT plate and we stop waiting for volunteer time to make things happen :)
#18 Updated by Anonymous 2018-08-19 09:14:31
Ok, I was referring to what you said in #note-14.
#19 Updated by intrigeri 2018-09-12 06:22:53
- blocks Feature #15392: Core work 2018Q2 → 2018Q3: User experience added
#20 Updated by intrigeri 2018-09-12 06:26:37
- Assignee changed from intrigeri to sajolida
- Target version set to Tails_3.10.1
- QA Check set to Info Needed
Dear sajolida, how bad would it be if, as a consequence of Feature #15281, we stopped supporting automatic upgrades on systems with only 2GB of RAM? Perhaps it’s worth computing stats about RAM installed on systems sending WhisperBack reports (shout if you need help cooking a regexp).
Depending on the answer, we’ll block on this (or not) for Feature #15281, which itself blocks Feature #8415, which the FT would like to complete in 2018Q4.
#21 Updated by intrigeri 2018-09-12 09:48:24
intrigeri wrote:
> Perhaps it’s worth computing stats about RAM installed on systems sending WhisperBack reports (shout if you need help cooking a regexp).
Actually we already have internal.git:stats/whisperback_scripts/ram.pl :)
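For reference, a rough sketch of the kind of extraction such a script does, assuming the decrypted reports embed a MemTotal: line from /proc/meminfo (an assumption about the report format, with hypothetical paths):

    #!/bin/sh
    # Not ram.pl itself: print one RAM amount in bytes per decrypted report.
    for report in reports/*.txt; do
        awk '/^MemTotal:/ {print $2 * 1024; exit}' "$report"
    done > ram-amounts.txt
    # ram-amounts.txt then has the "list of RAM amounts" shape that ram.pl
    # expects as input (see comment #34 below).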
#22 Updated by sajolida 2018-09-14 13:31:46
- Assignee changed from sajolida to intrigeri
These are the reports I could get (before being hit by Bug #15955):
298 2017Q3.ram
257 2017Q4.ram
46 2018Q1.ram
0 2018Q2.ram
601 total
- count: 600
- variance: 27979743180995.6
- standard deviation: 5289588.18633319
- mean: 7.0G
- per-quartiles:
* min: 1.8G
* 25th percentile: 3.8G
* 50th percentile (median): 5.8G
* 75th percentile: 7.8G
* max: 32G
- frequency distribution:
* 8.8% under 2.1G
* 0.2% between 2.1G and 2.4G
* 0.3% between 2.4G and 2.7G
* 6.0% between 2.7G and 3.0G
* 0.3% between 3.0G and 3.3G
* 3.0% between 3.3G and 3.6G
* 21.0% between 3.6G and 3.9G
* 8.5% between 3.9G and 4.2G
* 0.2% between 4.2G and 5.1G
* 0.3% between 5.1G and 5.4G
* 0.7% between 5.4G and 5.7G
* 3.0% between 5.7G and 6.0G
* 0.7% between 6.0G and 6.8G
* 0.2% between 6.8G and 7.1G
* 0.8% between 7.1G and 7.4G
* 6.0% between 7.4G and 7.7G
* 23.2% between 7.7G and 8.0G
* 0.2% between 8.0G and 9.8G
* 0.2% between 9.8G and 12G
* 0.5% between 12G and 12G
* 1.2% between 12G and 12G
* 0.5% between 12G and 16G
* 0.5% between 16G and 16G
* 12.0% between 16G and 16G
* 0.3% between 16G and 16G
* 0.2% between 16G and 23G
* 0.3% between 23G and 24G
* 1.0% between 24G and 32G
Requesting 3GB might break upgrades for 15% of our audience (the buckets under 3.0G add up to 8.8% + 0.2% + 0.3% + 6.0% ≈ 15%). That’s assuming that the hardware of people sending WhisperBack reports is representative of our general audience (or intended audience) but, if I were to guess, I would say that they are a bit more tech-savvy, richer, and have better hardware.
15% seems a lot to me.
But I can compute stats for after 3.6.2 (April) once I’ve solved Bug #15955. The progression itself is also interesting data, to see how fast this hardware is dying.
#23 Updated by intrigeri 2018-09-14 14:19:33
> These are the reports I could get (before being hit by Bug #15955):
Thanks!
> 15% seems a lot to me.
Absolutely.
> But I can compute stats for after 3.6.2 (April) once I’ve solved Bug #15955. The progression itself is also interesting data, to see how fast this hardware is dying.
Indeed, next time it would be interesting to have separate stats per quarter, e.g. since 2017Q3.
#24 Updated by intrigeri 2018-09-15 07:40:13
- Target version changed from Tails_3.10.1 to Tails_3.11
- QA Check deleted (Info Needed)
#25 Updated by sajolida 2018-09-15 20:43:28
Here you go. It’s weird that the number of people with < 2GB goes back up in 2018Q2, so I added stats for what we already have in 2018Q3.
2017Q3
- count: 297
- variance: 21261724312256.8
- standard deviation: 4611043.73350077
- mean: 6.4G
- per-quartiles:
* min: 1.9G
* 25th percentile: 3.8G
* 50th percentile (median): 4.0G
* 75th percentile: 7.8G
* max: 32G
- frequency distribution:
* 11.4% under 2.2G
* 1.0% between 2.2G and 2.8G
* 6.1% between 2.8G and 3.0G
* 0.3% between 3.0G and 3.3G
* 3.0% between 3.3G and 3.6G
* 25.6% between 3.6G and 3.9G
* 4.0% between 3.9G and 4.2G
* 0.3% between 4.2G and 5.1G
* 1.3% between 5.1G and 5.7G
* 3.0% between 5.7G and 6.0G
* 1.3% between 6.0G and 6.9G
* 0.3% between 6.9G and 7.2G
* 1.0% between 7.2G and 7.5G
* 8.1% between 7.5G and 7.7G
* 19.5% between 7.7G and 8.0G
* 0.3% between 8.0G and 11G
* 1.0% between 11G and 12G
* 0.7% between 12G and 12G
* 0.7% between 12G and 16G
* 0.7% between 16G and 16G
* 9.4% between 16G and 16G
* 0.3% between 16G and 23G
* 0.3% between 23G and 32G
2017Q4
- count: 257
- variance: 34981401127129.5
- standard deviation: 5914507.68256577
- mean: 7.7G
- per-quartiles:
* min: 1.8G
* 25th percentile: 3.8G
* 50th percentile (median): 7.5G
* 75th percentile: 7.9G
* max: 32G
- frequency distribution:
* 5.8% under 2.1G
* 0.4% between 2.1G and 2.4G
* 0.4% between 2.4G and 2.7G
* 5.4% between 2.7G and 3.0G
* 0.4% between 3.0G and 3.3G
* 3.1% between 3.3G and 3.6G
* 19.1% between 3.6G and 3.9G
* 9.3% between 3.9G and 4.2G
* 0.8% between 4.2G and 5.4G
* 3.1% between 5.4G and 6.0G
* 0.8% between 6.0G and 7.4G
* 6.6% between 7.4G and 7.7G
* 23.3% between 7.7G and 8.0G
* 0.4% between 8.0G and 9.8G
* 1.6% between 9.8G and 12G
* 0.4% between 12G and 16G
* 0.4% between 16G and 16G
* 16.7% between 16G and 16G
* 1.9% between 16G and 32G
2018Q1
- count: 284
- variance: 33443635436891.2
- standard deviation: 5783047.2449126
- mean: 7.3G
- per-quartiles:
* min: 1.9G
* 25th percentile: 3.8G
* 50th percentile (median): 7.7G
* 75th percentile: 7.8G
* max: 47G
- frequency distribution:
* 5.6% under 2.3G
* 0.7% between 2.3G and 2.8G
* 3.9% between 2.8G and 3.2G
* 2.8% between 3.2G and 3.7G
* 27.8% between 3.7G and 4.1G
* 0.4% between 4.1G and 5.0G
* 3.9% between 5.0G and 5.9G
* 0.7% between 5.9G and 7.3G
* 4.9% between 7.3G and 7.7G
* 34.9% between 7.7G and 8.2G
* 1.8% between 8.2G and 12G
* 0.4% between 12G and 13G
* 0.4% between 13G and 16G
* 9.5% between 16G and 16G
* 0.7% between 16G and 24G
* 1.4% between 24G and 32G
* 0.4% between 32G and 47G
2018Q2
- count: 212
- variance: 39142858705476.2
- standard deviation: 6256425.39358348
- mean: 7.3G
- per-quartiles:
* min: 1.9G
* 25th percentile: 3.8G
* 50th percentile (median): 5.8G
* 75th percentile: 7.8G
* max: 47G
- frequency distribution:
* 9.0% under 2.3G
* 1.4% between 2.3G and 2.8G
* 2.8% between 2.8G and 3.2G
* 5.7% between 3.2G and 3.7G
* 27.8% between 3.7G and 4.1G
* 0.5% between 4.1G and 5.5G
* 3.8% between 5.5G and 5.9G
* 0.9% between 5.9G and 6.8G
* 0.5% between 6.8G and 7.3G
* 4.2% between 7.3G and 7.7G
* 25.9% between 7.7G and 8.2G
* 3.8% between 8.2G and 12G
* 0.5% between 12G and 16G
* 10.4% between 16G and 16G
* 0.9% between 16G and 24G
* 1.4% between 24G and 32G
* 0.5% between 32G and 47G
2018Q3
- count: 167
- variance: 38635795702148.2
- standard deviation: 6215769.92030337
- mean: 8.6G
- per-quartiles:
* min: 1.8G
* 25th percentile: 3.9G
* 50th percentile (median): 7.7G
* 75th percentile: 16G
* max: 32G
- frequency distribution:
* 4.2% under 2.0G
* 0.6% between 2.0G and 2.9G
* 2.4% between 2.9G and 3.5G
* 9.6% between 3.5G and 3.8G
* 18.6% between 3.8G and 4.1G
* 0.6% between 4.1G and 5.6G
* 1.8% between 5.6G and 5.9G
* 0.6% between 5.9G and 6.2G
* 0.6% between 6.2G and 6.8G
* 0.6% between 6.8G and 7.4G
* 4.2% between 7.4G and 7.7G
* 28.7% between 7.7G and 8.0G
* 0.6% between 8.0G and 9.7G
* 0.6% between 9.7G and 11G
* 0.6% between 11G and 12G
* 1.8% between 12G and 16G
* 18.0% between 16G and 16G
* 3.6% between 16G and 16G
* 2.4% between 16G and 32G
#26 Updated by sajolida 2018-09-17 17:14:05
- related to Bug #15955: whisperback_scripts/decrypt.sh doesn't handle Memory Hole added
#27 Updated by intrigeri 2018-09-18 08:31:50
- Assignee changed from intrigeri to sajolida
- QA Check set to Info Needed
> Here you go.
Thanks!
> It’s weird that the number of people with < 2GB goes back up in 2018Q2
Well, on a sample of ~250 data points, +/- 12 reports over 3 months is sufficient to cause a 5% difference, so it’s barely statistically significant.
In passing, I’ve noticed a potential problem with our stats method: we’re counting reports sent from Tails running in a VM. Our virtualization doc reads “allocate at least 2048 MB of RAM” so I suspect many VM users will have allocated exactly 2GB. This might skew our results because I guess that most such users could actually allocate 3GB of RAM to the VM just as well. If I added an option to the script that extracts these stats for ignoring reports sent from VMs, would you re-run it?
#28 Updated by sajolida 2018-09-25 22:52:15
- File 2017Q3.stats added
- File 2017Q4.stats added
- File 2018Q1.stats added
- File 2018Q2.stats added
- File 2018Q3.stats added
Your script takes as input a list of RAM amounts, so I don’t think we can add logic to it to ignore virtual machines. But I wrote a one-liner (for real!) to delete such reports from a set of decrypted reports. See bf2328a.
It’s not 100% perfect but it should be good enough. Patches are welcome!
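(The actual one-liner is in bf2328a; below is only an illustrative sketch of the idea, with guessed hypervisor markers rather than the patterns the real script matches on.)

    #!/bin/sh
    # Illustrative only: drop decrypted reports whose hardware section
    # mentions a known hypervisor.
    for report in reports/*.txt; do
        if grep -qiE 'QEMU|VirtualBox|VMware|virtual machine' "$report"; then
            rm -- "$report"
        fi
    done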
And I ran your script again. See the results in the attached files.
Summary: total reports per quarter and share of reports with less than 3 GB of RAM:
Quarter | Reports | < 3 GB
2017Q3 | 283 | 17%
2017Q4 | 243 | 5%
2018Q1 | 277 | 8%
2018Q2 | 204 | 11%
2018Q3 | 186 | 5%
So unfortunately, removing virtual machines doesn’t really change the results…
#29 Updated by sajolida 2018-09-30 12:58:07
- related to Bug #16015: Change hardware requirement for using Tails? added
#30 Updated by intrigeri 2018-10-11 09:13:54
- Assignee changed from sajolida to intrigeri
- QA Check deleted (Info Needed)
sajolida wrote:
> Your script takes as input a list of RAM amounts, so I don’t think we can add logic to it to ignore virtual machines.
I don’t understand why it would be hard to support a --exclude-vm option. Anyway:
> So unfortunately, removing virtual machines doesn’t really change the results…
OK. So I’ll adjust metadata so this ticket blocks what it should. Thanks!
#31 Updated by intrigeri 2018-10-11 09:14:15
- related to deleted (Feature #15281: Stack one single SquashFS diff when upgrading)
#32 Updated by intrigeri 2018-10-11 09:14:21
- Parent task set to Feature #15281
#33 Updated by intrigeri 2018-10-11 09:14:40
- blocks Feature #15506: Core work 2018Q4: Foundations Team added
#34 Updated by sajolida 2018-10-11 18:40:23
> sajolida wrote:
>> Your script takes as input a list of RAM amounts, so I don’t think we can add logic to it to ignore virtual machines.
>
> I don’t understand why it would be hard to support a --exclude-vm option. Anyway:

Because your script takes as input a list of RAM amounts, formatted like this:
3787678987
4763456786
8545678987
8365456787
So at that point we can’t know which amounts of RAM come from a VM and which come from real hardware. A solution would be to improve your script to take a list of decrypted WhisperBack reports and have an --exclude-vm option.
See also the comment on top of internal.git:stats/whisperback_scripts/ram.pl.
>> So unfortunately, removing virtual machines doesn’t really change the results…
>
> OK. So I’ll adjust metadata so this ticket blocks what it should. Thanks!
You’re welcome :)
#35 Updated by sajolida 2018-10-29 14:25:02
- blocked by deleted (Feature #15392: Core work 2018Q2 → 2018Q3: User experience)
#36 Updated by intrigeri 2018-11-05 14:45:47
- Target version changed from Tails_3.11 to Tails_3.12
#37 Updated by intrigeri 2018-11-06 15:04:45
- Target version changed from Tails_3.12 to Tails_3.13
#38 Updated by sajolida 2018-11-13 09:41:39
I found this awesome website reporting aggregated telemetry data from Firefox users:
It includes hardware data and users with 2GB of RAM account for 11.4% of Firefox users.
#39 Updated by intrigeri 2018-11-15 19:28:48
> I found this awesome website reporting aggregated telemetry data from Firefox users:
> It includes hardware data and users with 2GB of RAM account for 11.4% of Firefox users.
Excellent, thanks :)
#40 Updated by intrigeri 2018-11-20 15:09:28
- Parent task changed from Feature #15281 to Feature #15283
#41 Updated by intrigeri 2018-12-02 21:55:14
- blocks Feature #15507: Core work 2019Q1: Foundations Team added
#42 Updated by intrigeri 2018-12-02 21:55:34
- blocked by deleted (Feature #15506: Core work 2018Q4: Foundations Team)
#43 Updated by sajolida 2019-01-18 15:43:31
- related to Feature #5502: Next time we bump RAM requirements: notify user at runtime if RAM requirements are not met added
#44 Updated by intrigeri 2019-01-25 16:32:03
- Target version changed from Tails_3.13 to 2019
#45 Updated by intrigeri 2019-02-06 14:10:32
- blocks Feature #16209: Core work: Foundations Team added
#46 Updated by intrigeri 2019-02-06 14:10:35
- blocked by deleted (Feature #15507: Core work 2019Q1: Foundations Team)
#47 Updated by intrigeri 2019-04-05 16:08:01
- Assignee deleted (intrigeri)
#48 Updated by intrigeri 2019-04-14 07:29:10
- Assignee set to intrigeri
Most of the work will happen in tails-iuk so it would be rather wasteful to ask anyone else to do it.
#49 Updated by intrigeri 2019-08-31 17:10:33
Hi @sajolida,
this might be interesting to you, as you’ve been involved in this discussion in the past and I know you care about the upgrade UX… and I’ve noticed a problematic side effect of a change we made to improve UX.
intrigeri wrote:
> Dear sajolida, how bad would it be if, as a consequence of Feature #15281, we stopped supporting automatic upgrades on systems with only 2GB of RAM?
>
> Depending on the answer, we’ll block on this (or not) for […]
While updating https://tails.boum.org/blueprint/Endless_upgrades/#single-squashfs-diff, I discovered that, in practice, we already stopped supporting automatic upgrades on systems with 2GB of RAM for users who have skipped upgrading to one of our releases, back when we started advertising upgrades from N to N+2. When we made this change:
- We achieved the intended goal (avoiding the need to apply 2 upgrades in a row when the user has skipped one) for users with 3GB of RAM or more.
- But at the same time, we made automatic upgrades impossible to apply for users in the same situation (having skipped upgrading to the previous release) who have 2GB of RAM, because our N→N+2 IUKs are generally too big for them to apply.
This, combined with the memory usage improvements I’ve done in Bug #12092, means that in practice, doing the work this ticket is about will:
- for users with 3GB of RAM or more, who occasionally skip an upgrade: avoid a regression that Feature #15281 might introduce at some point. I write “might” because anonym computed data for the 2.x series, which shows this problem would never have happened there, by a large margin; but we don’t have data handy for the 3.x series.
- for users with 2GB of RAM, who occasionally skip an upgrade: fix a regression that we already introduced a while ago
- for users with 2GB of RAM, who never skip any upgrade: fix a regression that Feature #15281 would definitely introduce
All these outcomes are nice and I still think we should do this work. But as we can see, the expected benefits are not as great as I thought when I asked you the question about “[stop] supporting automatic upgrades on systems with only 2GB of RAM”: we’ve already done so in practice, unless these users never skip any Tails release.
Now, whether we should keep shipping N→N+2 incremental upgrades in the meantime, until this ticket is solved, is another question. It boils down to who we want to optimize for. I think it’s OK that we continue optimizing UX for the 90-95% who have 3+ GB of RAM and occasionally skip a release, even if this makes the upgrade UX worse for the 5-10% with only 2GB of RAM who occasionally skip a release. If you disagree, please file a dedicated ticket where we can discuss it :)
#50 Updated by sajolida 2019-09-04 11:12:53
> I know you care about the upgrade UX…
It’s not me, it’s our users :)
> If you disagree, please file a dedicated ticket where we can discuss it :)
I agree with optimizing for people with 3GB of RAM.
#51 Updated by intrigeri 2019-11-24 11:58:39
- Feature Branch changed from feature/11131-endless-upgrade to feature/15281-single-squashfs-diff, iuk:feature/11131-endless-upgrade, iuk:feature/15281-single-squashfs-diff
intrigeri wrote:
> The corresponding design doc changes were made on feature/11131-endless-upgrade (tails.git).
… and I’ve ported them to feature/15281-single-squashfs-diff.
#52 Updated by intrigeri 2019-11-27 08:38:19
- Target version changed from 2019 to Tails_4.2
#53 Updated by intrigeri 2019-12-01 11:16:56
- Priority changed from Normal to High
#54 Updated by intrigeri 2019-12-18 17:17:56
- Description updated
#55 Updated by intrigeri 2019-12-18 18:05:56
- Description updated
#56 Updated by intrigeri 2019-12-18 19:11:33
- Status changed from In Progress to Resolved
I’m done here. I’ll track the next steps via the parent ticket and Feature #15286.
tl;dr:
- to check for upgrades, thanks to Feature #17152, 200 MiB of RAM is enough (vs. 300 MiB previously)
- to download and install an IUK, thanks to the work done here on IUK format v2, one needs about 364 MiB of RAM + the size of the IUK (vs. twice the size of the IUK previously); this will make a big difference with bigger IUKs, which is why we wanted to work on this.
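To make the difference concrete with a hypothetical 500 MiB IUK: the v2 format needs roughly 364 + 500 ≈ 864 MiB of RAM, whereas the old format needed about 2 × 500 = 1000 MiB, and the gap keeps widening as IUKs grow.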
#57 Updated by intrigeri 2019-12-18 20:02:13
- Assignee deleted (intrigeri)