Feature #12409
Reconsider the need for publishing Vagrant baseboxes
100%
Description
After a nice discussion with Ximin, anonym and others last week, our conclusion was that we should simply not publish nor host any Vagrant basebox at all, but instead make it trivial to create your own.
Advantages:
- one less binary blob used as input of the build => huge increase of the value of reproducible ISO builds
- with the basebox defined as code, if I need a new basebox for some new branch I’ll just have to change code, and not manually build + upload a basebox to some place
- no need to do Feature #11982 + the repetitive work it commits us to do forever
- less infra development work
- less upload bandwidth required on our servers
Drawbacks:
- some more build system coding to do, to automate the basebox creation process
- non-Debian build hosts might be a problem, e.g. if some tools used to create baseboxes are not readily available on other distros
- more storage needed on isobuilders (to build the baseboxes)
If we decide to go this way, it changes quite a lot what’s on anonym’s and bertagaz’ plate, so let’s try to reach a prompt conclusion.
Subtasks
History
#1 Updated by bertagaz 2017-04-03 11:04:36
intrigeri wrote:
> After a nice discussion with Ximin, anonym and others last week, our conclusion was that we should simply not publish nor host any Vagrant basebox at all, but instead make it trivial to create your own.
Interesting. This raises some questions regarding how it would work in our infra (Jenkins) and other things:
We’re already speaking of reducing the feedback loop. Our ISO build time is already growing, and if the isobuilders need to build the basebox before building the ISO, this will certainly grow once more. Or maybe the idea is to have some sort of mechanism in our infra that would build and share the baseboxes for all our isobuilders?
Regarding the advantages:
intrigeri wrote:
> * less infra development work
Hosting and sharing the baseboxes on our infra is probably not that much dev work. But this is still not certain, considering this part of the work hasn’t progressed much so far. It depends on how we will implement the upload of baseboxes. This is also only an advantage if implementing this proposal (build system coding) costs less dev work.
intrigeri wrote:
> * less resources requirements on our servers (storage space, upload bandwidth)
Most of the resources we’ll need for Vagrant will certainly go to the APT snapshots, which this proposal and our previous design both require anyway.
Hosting the baseboxes, in the current state of the blueprint, will probably not be that huge a cost (considering we decided not to archive them for now): we’ll have 1 basebox, and twice a year, when we update it, we’ll temporarily have to host 2 baseboxes. It may happen from time to time (mainly when we update our build environment) that we host at least one more. We may also host another one while we work on upgrading Tails to the next Debian version. So that’s potentially at most 4 to < 10 baseboxes to host. The basebox size is 200-300 MB, so that’s not so much disk space in the end.
So I wonder how interesting this proposal is in the end.
#2 Updated by anonym 2017-04-03 13:51:46
- Assignee changed from anonym to bertagaz
- QA Check set to Info Needed
bertagaz wrote:
> intrigeri wrote:
> > After a nice discussion with Ximin, anonym and others last week, our conclusion was that we should simply not publish nor host any Vagrant basebox at all, but instead make it trivial to create your own.
>
> Interesting. This raises some questions regarding how it would work in our infra (Jenkins) and other things:
>
> We’re already speaking of reducing the feedback loop. Our ISO build time is already growing, and if the isobuilders need to build the basebox before building the ISO, this will certainly grow once more. Or maybe the idea is to have some sort of mechanism in our infra that would build and share the baseboxes for all our isobuilders?
Note that Vagrant caches baseboxes, so the basebox would only be built if it is missing => worst case: each isobuilder will build the basebox the first time that particular basebox is encountered. On my system building it takes ~20 minutes, so let’s assume the same for the isobuilders. Given that the plan is to release one basebox for each Tails release = once every six weeks = 52/6 times per year, we’ll only lose 52/6 × 20 minutes per year per isobuilder, which sounds like something we don’t have to try to optimize away.
Does this make more sense to you now?
Building the basebox requires quite a bit of disk space, though, since it will write the full size of the disk image (currently 20 GiB) so I think making that available might actually be the only potential adjustment that will be needed on the isobuilders.
#3 Updated by intrigeri 2017-04-03 16:39:10
> So I wonder how much interesting this proposal is in the end.
Meta: I’ve listed all the pros and cons I could think of, but really the one I care about is “one less binary blob used as input of the build => huge increase of the value of reproducible ISO builds”. Sorry I didn’t make this clear enough initially! The main goal of this proposal is definitely not saving resources or infra work. All the other pros & cons are pretty minor things, that are worth taking into account for sure, but are not game changers compared to that single one. So at this point I’d like to focus on discussing whether we think the main goal is a relevant one, and then discuss if what’s required to reach it fits in our budget.
#4 Updated by bertagaz 2017-04-04 09:31:09
- Assignee changed from bertagaz to anonym
anonym wrote:
> Note that Vagrant caches baseboxes, so the basebox would only be built if it is missing => worst case: each isobuilder will build the basebox the first time that particular basebox is encountered. On my system building it takes ~20 minutes, so let’s assume the same for the isobuilders. Given that the plan is to release one basebox for each Tails release = once every six weeks = 52/6 times per year, we’ll only lose 52/6 × 20 minutes per year per isobuilder, which sounds like something we don’t have to try to optimize away.
I didn’t think about this Vagrant basebox caching, thanks for pointing that out.
It sure is an argument against hosting our baseboxes: it’s not worth bothering about hosting baseboxes if they get cached on every isobuilder anyway.
OTOH, this will complicate things a bit in Jenkins: with the current design, we don’t care much about this caching, as the file would be hosted nearby on Lizard, so we can just remove all cached baseboxes at the end of every build, since downloading them again is fast. That would be the easy way to keep the Vagrant basebox cache from growing too much in our infra.
With this new proposal, we’ll have to find a way to remove only the old cached baseboxes. We also have to take into account that if a branch needs a specific new basebox (which will be the case at least when we update our build basebox), we need to keep that one in the cache, as well as the current official one. This is doable, but not so easy.
> Building the basebox requires quite a bit of disk space, though, since it will write the full size of the disk image (currently 20 GiB) so I think making that available might actually be the only potential adjustment that will be needed on the isobuilders.
Right! So one more drawback is that it costs a bit more in terms of disk space than what we planned with our current blueprint.
intrigeri wrote:
> Meta: I’ve listed all the pros and cons I could think of, but really the one I care about is “one less binary blob used as input of the build => huge increase of the value of reproducible ISO builds”. Sorry I didn’t make this clear enough initially! The main goal of this proposal is definitely not saving resources or infra work. All the other pros & cons are pretty minor things, that are worth taking into account for sure, but are not game changers compared to that single one. So at this point I’d like to focus on discussing whether we think the main goal is a relevant one, and then discuss if what’s required to reach it fits in our budget.
Ok, I surely didn’t get that subtlety. I get the “increase of value because one less binary blob” argument, even if I wonder how strong it is given we’ll still ship tons of binary blobs in the form of Debian packages used to build this basebox anyway.
#5 Updated by anonym 2017-04-04 10:18:11
- Assignee changed from anonym to bertagaz
bertagaz wrote:
> anonym wrote:
> > Note that Vagrant caches baseboxes, so the basebox would only be built if it is missing => worst case: each isobuilder will build the basebox the first time that particular basebox is encountered. On my system building it takes ~20 minutes, so let’s assume the same for the isobuilders. Given that the plan is to release one basebox for each Tails release = once every six weeks = 52/6 times per year, we’ll only lose 52/6 × 20 minutes per year per isobuilder, which sounds like something we don’t have to try to optimize away.
>
> I didn’t think about this vagrant basebox caching, thanks to point that.
> It sure is an argument against hosting our baseboxes, also because it’s not worth bothering about hosting baseboxes if they get cached on every isobuilders anyway.
>
> OTOH, this will complicate a bit things in Jenkins: with the current design, we don’t care much about this caching as the file would be hosting nearby on Lizard, so we can just remove all cached baseboxes at the end of every build, as downloading them is fast. That would be the easy way not to have the vagrant basebox cache grow too much in our infra.
>
> With this new proposal, we’ll have to find a way to remove old cached baseboxes only. But also to take into account that if we have a branch that needs a specific new basebox (which will be the case at least when we update our build basebox), we need to keep this one in the cache, as well as the current official one. This is doable, but not so easy.
As a first iteration we can add an option (that we set on Jenkins) that opts-in to download the basebox instead of building it, i.e. we keep the exact, current behavior on Jenkins, but the default for everyone else is to also build the basebox. Then in a second+ iteration we also move Jenkins to building the basebox, after we’ve figured out how we want to solve it.
BTW, here’s another advantage that intrigeri had missed: with the basebox defined as code, if I need a new basebox for some new branch I’ll just have to change code, and not manually build + upload a basebox to some place. I reckon this will save a couple of hours of developer time each year purely for the boxes we have to do for release work, but probably more for cases where we need a new one for some feature branch. I value this point over all the advantages intrigeri listed (except the one about “one less binary blob”) since it would, for me with my developer hat on, actually be a noticeable improvement.
Some ideas to move in this direction:
1. It would be very easy to implement a Rake task for cleaning up old baseboxes, say those older than six months. isobuilders could then run this before/after each build, or with a daily cron job, or similar. Of course, `~/.vagrant.d` would have to persist between builds with this approach (a rough sketch of such a task follows after this list).
2. How crazy would it be to introduce yet another Jenkins job type for ensuring the existence of the required basebox, and then make all build jobs depend on it? It could be done by a different builder VM (baseboxbuilder) if that helps. These baseboxes would then be stored somewhere only accessible by the isobuilders, and we’d provide an option to change the base URL of where to download baseboxes from. Then the situation for the isobuilders will be identical to now (e.g. no need to persist `~/.vagrant.d` between builds).
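For illustration, such a cleanup task could look roughly like this. This is a hypothetical sketch: the task name, the `tails-builder-*` box naming, the six-month cutoff and the reliance on mtime are assumptions, not the actual implementation.

    # Rakefile sketch: remove cached Tails builder baseboxes older than ~6 months.
    require 'fileutils'

    BOXES_DIR = File.join(Dir.home, '.vagrant.d', 'boxes')
    MAX_AGE_SECONDS = 6 * 30 * 24 * 60 * 60

    namespace :basebox do
      desc 'Remove cached baseboxes older than roughly six months'
      task :clean_old do
        Dir.glob(File.join(BOXES_DIR, '*tails-builder*')).each do |box|
          next unless File.directory?(box)
          if Time.now - File.mtime(box) > MAX_AGE_SECONDS
            puts "Removing old basebox: #{File.basename(box)}"
            FileUtils.rm_rf(box)
          end
        end
      end
    end

An isobuilder (or a developer) could then simply run `rake basebox:clean_old` from a cron job or before each build.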
What do you think?
> intrigeri wrote:
> > Meta: I’ve listed all the pros and cons I could think of, but really the one I care about is “one less binary blob used as input of the build => huge increase of the value of reproducible ISO builds”. Sorry I didn’t make this clear enough initially! The main goal of this proposal is definitely not saving resources or infra work. All the other pros & cons are pretty minor things, that are worth taking into account for sure, but are not game changers compared to that single one. So at this point I’d like to focus on discussing whether we think the main goal is a relevant one, and then discuss if what’s required to reach it fits in our budget.
>
> Ok, I surely didn’t get that subtility. I get the “increase of value because one less binary blob” argument, even if I wonder how strong it is given we’ll still ship tons of binary blobs in the form of Debian packages used to build this basebox anyway.
IMHO it is still a big step towards “you only have to trust Debian when building Tails”, which I feel is super important. It’s only once we have achieved that goal that I truly can sleep at night, not worrying about my machine being compromised! :)
#6 Updated by bertagaz 2017-04-04 10:55:25
- Assignee changed from bertagaz to anonym
anonym wrote:
> As a first iteration we can add an option (that we set on Jenkins) that opts-in to download the basebox instead of building it, i.e. we keep the exact, current behavior on Jenkins, but the default for everyone else is to also build the basebox. Then in a second+ iteration we also move Jenkins to building the basebox, after we’ve figured out how we want to solve it.
Yes, that would be a way to ease the move.
> BTW, here’s another advantage that intrigeri had missed: with the basebox defined as code, if I need a new basebox for some new branch I’ll just have to change code, and not manually build + upload a basebox to some place. I reckon this will save a couple of hours of developer time each year purely for the boxes we have to do for release work, but probably more for cases where we need a new one for some feature branch. I value this point over all the advantages intrigeri listed (except the one about “one less binary blob”) since it would, for me with my developer hat on, actually be a noticeable improvement.
Good point.
> Some ideas to move in this direction:
>
> # It would be very easy to implement a Rake task for cleaning up old baseboxes, say those older than six months. isobuilders could then run this before/after each build, or with daily cron job, or similar. Of course, `~/.vagrant.d` would have to persist between builds with this approach.
> # How crazy would it be to introduce yet another Jenkins job type for ensuring the existence of the required basebox, and then make all build jobs depend on it? It could be done by a different builder VM (baseboxbuilder) if that helps. These baseboxes would then be stored somewhere only accessible by the isobuilders, and we’d provide an option to change the base URL of where to download baseboxes from. Then the situation for the isobuilders will be identical to now (e.g. no need to persist `~/.vagrant.d` between builds).
>
> What do you think?
I like your first idea, which solves my concerns in an easy way (and could avoid implementing this proposal in Jenkins over several iterations). The second one seems over-complicated to me, but we can maybe keep it somewhere in case building the basebox before building the ISO ends up being too costly in our infra.
> anonym wrote:
> > bertagaz wrote:
> > Ok, I surely didn’t get that subtility. I get the “increase of value because one less binary blob” argument, even if I wonder how strong it is given we’ll still ship tons of binary blobs in the form of Debian packages used to build this basebox anyway.
>
> IMHO it is still a big step towards “you only have to trust Debian when building Tails”, which I feel is super important. It’s only once we have achieved that goal that I truly can sleep at night, not worrying about my machine being compromised! :)
Ok, got it!
With your (fast) replies, I’m much more convinced of the interest and feasibility of this proposal. :)
So if we agree on that, we need to decide what to do next to get it done, and by whom.
#7 Updated by anonym 2017-04-05 09:03:39
- Assignee changed from anonym to intrigeri
bertagaz wrote:
> anonym wrote:
> > As a first iteration we can add an option (that we set on Jenkins) that opts-in to download the basebox instead of building it, i.e. we keep the exact, current behavior on Jenkins, but the default for everyone else is to also build the basebox. Then in a second+ iteration we also move Jenkins to building the basebox, after we’ve figured out how we want to solve it.
>
> Yes, that would be a way to ease the move.
intrigeri, do you agree? This deviates from the original plan, but at least one has to opt-in for the deviating behavior.
> > Some ideas to move in this direction:
> >
> > # It would be very easy to implement a Rake task for cleaning up old baseboxes, say those older than six months. isobuilders could then run this before/after each build, or with daily cron job, or similar. Of course, `~/.vagrant.d` would have to persist between builds with this approach.
> > # How crazy would it be to introduce yet another Jenkins job type for ensuring the existence of the required basebox, and then make all build jobs depend on it? It could be done by a different builder VM (baseboxbuilder) if that helps. These baseboxes would then be stored somewhere only accessible by the isobuilders, and we’d provide an option to change the base URL of where to download baseboxes from. Then the situation for the isobuilders will be identical to now (e.g. no need to persist `~/.vagrant.d` between builds).
> >
> > What do you think?
>
> I like your first idea, which solves my concerns in an easy way (and could avoid implementing this proposal in Jenkins with several iterations).
Implemented in commit:0ecf045d4ae06d8ad187a55522e5db967f378681 (in the `wip/11972-use-vagrant-in-jenkins` branch).
> The second one seems over-complicated to me, but we can maybe keep it somewhere if the building of the basebox before building of the ISO ends up being too costy in our infra.
Got it!
> So if we agree on that, we need to decide what to do next to get it done, and by who.
I’ll deal with:
- make the Rake task `basebox:create` import the resulting base box.
- make the Rake task `build` depend on `basebox:create`, except if the build option `fetchbaseboxes` is set (a rough sketch of this wiring follows after this list).
- make `fetchbaseboxes` the default if we detect we are on Jenkins (no need for puppet stuff).
I believe the only thing that remains is:
- make each (libvirt) isobuilder run `rake basebox:clean_old` regularly.
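A minimal Rakefile sketch of how this could fit together (hypothetical: the way build options are read and the body of `basebox:create` are assumptions for illustration, not the actual implementation):

    namespace :basebox do
      desc 'Build the Tails builder base box and import it into Vagrant'
      task :create do
        # ... build the base box (e.g. with vmdebootstrap), then import it, e.g.:
        # sh "vagrant box add --name #{box_name} #{box_file}"
      end
    end

    desc 'Build the Tails ISO inside the Vagrant build VM'
    task :build do
      # Assumption: build options are passed as a space-separated list in the
      # TAILS_BUILD_OPTIONS environment variable.
      options = ENV['TAILS_BUILD_OPTIONS'].to_s.split
      if options.include?('fetchbaseboxes')
        # Jenkins-style behaviour: let Vagrant download the base box from our infra.
      else
        Rake::Task['basebox:create'].invoke
      end
      # ... then bring the build VM up and run the actual ISO build.
    end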
So, intrigeri, do you agree with this plan?
#8 Updated by intrigeri 2017-04-05 09:13:56
bertagaz wrote:
> I get the “increase of value because one less binary blob” argument, even if I wonder how strong it is given we’ll still ship tons of binary blobs in the form of Debian packages used to build this basebox anyway.
Sure, this is a valid concern wrt. the situation we’re in today. Thankfully most of these packages can already be built reproducibly, and I’m hopeful that this reaches 100% coverage during the Buster cycle. So, thinking long-term strategy, the fact that the WIP to make the other inputs of our build system reproducible currently covers only 94% of the Debian archive (91.5% of what we ship in the ISO, and an unknown percentage of what we ship in the build basebox) seems to be a weak justification for adding another large binary blob to the list of inputs (without any plan to make it build reproducibly). Now, if there were solid plans to build the basebox reproducibly, then it would be an entirely different story: then the only things we would have to consider wrt. hosting baseboxes or not are the cost of implementation, performance impact, and resource requirements. But that’s not part of the current plan, so “avoid introducing one large unreproducible binary blob in the list of inputs” still looks like a game changer to me, one that IMO is totally worth some minor additional implementation work / resource cost / performance hit (if any).
#9 Updated by intrigeri 2017-04-05 09:16:01
anonym wrote:
> Building the basebox requires quite a bit of disk space, though, since it will write the full size of the disk image (currently 20 GiB)
I’m surprised a sparse file isn’t used for this.
#10 Updated by intrigeri 2017-04-05 09:18:27
bertagaz wrote:
> Right! So one more drawback is that it costs a bit more in term of disk space than what we planed with our current blueprint.
If there’s been some research done on the resources requirements, please update Feature #12002 accordingly as I see no such info there.
#11 Updated by intrigeri 2017-04-05 09:25:13
- Description updated
#12 Updated by intrigeri 2017-04-05 09:26:32
- Description updated
#13 Updated by intrigeri 2017-04-05 09:48:17
- Assignee changed from intrigeri to anonym
Hi!
anonym:
> bertagaz wrote:
>> anonym wrote:
>> > As a first iteration we can add an option (that we set on Jenkins) that opts-in to download the basebox instead of building it, i.e. we keep the exact, current behavior on Jenkins, but the default for everyone else is to also build the basebox. Then in a second+ iteration we also move Jenkins to building the basebox, after we’ve figured out how we want to solve it.
>>
>> Yes, that would be a way to ease the move.
> intrigeri, do you agree? This deviates from the original plan, but at least one has to opt-in for the deviating behavior.
Perhaps I misunderstood something, but I don’t like it much. First, with this plan we would still have to deal with generating + uploading + hosting baseboxes (and then probably drop the whole thing later so it’s wasting some time). Second, we’ll have to clean up the baseboxes cache on the isobuilders anyway so I’m not sure what it really buys us. And third, more importantly, my understanding of what follows is that it’s actually not needed:
>> > # It would be very easy to implement a Rake task for cleaning up old baseboxes, say those older than six months. isobuilders could then run this before/after each build, or with daily cron job, or similar. Of course, `~/.vagrant.d` would have to persist between builds with this approach.
[…]
>> I like your first idea, which solves my concerns in an easy way (and could avoid implementing this proposal in Jenkins with several iterations).
… is that bertagaz was arguing in favour of doing this (garbage collecting old baseboxes as part of the build on Jenkins isobuilders) in order to skip the 1st iteration, i.e. avoid having to use `fetchbaseboxes` on Jenkins, as it was meant to be a temporary workaround for something you have since proposed a better solution for. If I got bertagaz’ point right, I totally agree with it, and think we should skip the `fetchbaseboxes` iteration entirely, and directly aim for this nice way of avoiding storing too many baseboxes forever.
Also, note that even if we did not garbage collect old baseboxes on isobuilders, it’ll still eat no more than 1 GiB/year on each of them (Feature #11982 says we want to update the basebox at each Debian point-release, not at each Tails release), which is fully negligible compared to the continuous growth of our other storage needs. So IMO “With this new proposal, we’ll have to find a way to remove old cached baseboxes only” was jumping to conclusions a bit too fast, and we shouldn’t spend too much time finding the perfect solution to a problem that’s IMO essentially non-existing.
#14 Updated by anonym 2017-04-05 11:11:01
intrigeri wrote:
> anonym wrote:
> > Building the basebox requires quite a bit of disk space, though, since it will write the full size of the disk image (currently 20 GiB)
>
> I’m surprised a sparse file isn’t used for this.
They might be, but it won’t matter as far as I understand it; IIRC once the disk image is created it is completely filled with a file containing only zeros that is then removed — zeroing the free space allows data that was removed during the process to be cleaned up from the disk image, reducing its final size. I think using tricks with sparse files (like `dd`’s `conv=sparse`) when filling the disk would prevent this from working, resulting in a larger image.
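For illustration, the zero-fill trick described above could look roughly like this (a sketch under assumptions: the mount point and filler file name are made up, and the real work actually happens inside vmdebootstrap’s image customization step):

    # Fill the builder VM's filesystem with zeros, then delete the filler file,
    # so that previously-deleted data compresses away when the image is packed.
    mountpoint = '/mnt/tails-builder-root'   # hypothetical mount of the disk image
    filler = File.join(mountpoint, 'ZEROFILL')
    # dd exits with an error once the filesystem is full; that is expected here.
    system('dd', 'if=/dev/zero', "of=#{filler}", 'bs=1M')
    File.delete(filler)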
#15 Updated by anonym 2017-04-05 11:31:15
- Status changed from Confirmed to In Progress
- Assignee changed from anonym to bertagaz
- % Done changed from 0 to 50
intrigeri wrote:
> Hi!
>
> anonym:
> > bertagaz wrote:
> >> anonym wrote:
> >> > As a first iteration we can add an option (that we set on Jenkins) that opts-in to download the basebox instead of building it, i.e. we keep the exact, current behavior on Jenkins, but the default for everyone else is to also build the basebox. Then in a second+ iteration we also move Jenkins to building the basebox, after we’ve figured out how we want to solve it.
> >>
> >> Yes, that would be a way to ease the move.
>
> > intrigeri, do you agree? This deviates from the original plan, but at least one has to opt-in for the deviating behavior.
>
> Perhaps I misunderstood something, but I don’t like it much.
Derp, I misunderstood (or rather, misremembered the previous discussion and apparently didn’t bother re-reading what I was responding to). So, Jenkins will build the base boxes just like everyone else.
I’ll deal with:
- make the Rake task `build` somehow ensure that the desired base box is installed (personal notes for when I implement this: introduce `basebox:import` that runs `basebox:create` + imports the resulting base box iff it is not already imported; add an option `--quiet` to `basebox:create` that suppresses the instructions on how to import the base box, and use this option in `basebox:import`). A rough sketch follows after this list.
I believe the only things that remain are on the infra side, so bertagaz:
- make each (libvirt) isobuilder run `rake basebox:clean_old` regularly.
- somehow make sure each (libvirt) isobuilder has 20 GiB (+ some margin) of free disk space at the start of each build.
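A hypothetical sketch of the `basebox:import` idea (the box name, the box file name and the way the `--quiet` option would be handled are assumptions, not the actual implementation):

    def box_installed?(box_name)
      # Ask Vagrant which boxes are already imported on this machine.
      `vagrant box list`.lines.any? { |line| line.split.first == box_name }
    end

    namespace :basebox do
      desc 'Build and import the base box, unless it is already imported'
      task :import do
        box_name = 'tails-builder-amd64-jessie-20170105'  # placeholder name
        if box_installed?(box_name)
          puts "Base box #{box_name} is already imported, nothing to do."
        else
          # basebox:create would be run with the (hypothetical) --quiet flag so
          # it does not print the manual import instructions.
          Rake::Task['basebox:create'].invoke
          sh "vagrant box add --name #{box_name} #{box_name}.box"
        end
      end
    end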
Do we all agree now?
#16 Updated by bertagaz 2017-04-05 12:49:21
- Assignee changed from bertagaz to anonym
- % Done changed from 50 to 0
anonym wrote:
> intrigeri wrote:
> > Perhaps I misunderstood something, but I don’t like it much.
>
> Derp, I misunderstood (or rather, misremembered the previous discussion and apparently didn’t bother re-reading what I was responding to). So, Jenkins will build the base boxes just like everyone else.
>
> I’ll deal with:
>
> * make the Rake task `build` somehow ensure that the desired base box is installed (personal notes for when I implement this: introduce `basebox:import` that runs `basebox:create` + imports the resulting base box iff it is not already imported; add an option `--quiet` to `basebox:create` that suppresses the instructions on how to import the base box, and use this option in `basebox:import`).
I can review that and the branch you previously pushed.
> I believe the only things that remain are on the infra side, so bertagaz:
>
> * make each (libvirt) isobuilder run `rake basebox:clean_old` regularly.
> * somehow make sure each (libvirt) isobuilder has 20 GiB (+ some margin) of free disk space at the start of each build.
>
> Do we all agree now?
+1
#17 Updated by intrigeri 2017-04-05 13:14:41
>> * make each (libvirt) isobuilder run `rake basebox:clean_old` regularly.
Is there any strong reason not to do that before each build?
>> Do we all agree now?
> +1
ACK!
#18 Updated by anonym 2017-04-13 10:51:42
intrigeri wrote:
> >> * make each (libvirt) isobuilder run `rake basebox:clean_old` regularly.
>
> Is there any strong reason not to do that before each build?
I guess the non-libvirt/vagrant isobuilders won’t have `rake` installed. And if they do for some reason they won’t have `vagrant` installed.
#19 Updated by anonym 2017-04-13 10:53:58
- Assignee changed from anonym to bertagaz
- QA Check changed from Info Needed to Dev Needed
bertagaz, the `feature/11972-use-vagrant-in-jenkins` branch fails with:
01:46:24 + git clone /amnesia.git/.git /home/vagrant/amnesia
01:46:24 fatal: repository '/amnesia.git/.git' does not exist
So, on the non-libvirt/vagrant isobuilders, could you please add a symlink from `/amnesia.git` to wherever the repo is located?
#20 Updated by intrigeri 2017-04-16 12:22:14
Now that we’re done discussing this, can one of you two please update/create sibling tickets accordingly, so that Redmine encodes the current plan?
#21 Updated by anonym 2017-04-16 18:36:58
anonym wrote:
> I’ll deal with:
>
> * make the Rake task `build` somehow ensure that the desired base box is installed
Now implemented in the branch!
#22 Updated by anonym 2017-04-18 14:40:23
anonym wrote:
> anonym wrote:
> > I’ll deal with:
> >
> > * make the Rake task `build` somehow ensure that the desired base box is installed
>
> Now implemented in the branch!
Hm, now the branch is of course broken on Jenkins (vmdebootstrap is not installed), so maybe pushing to it wasn’t the smartest move. OTOH, while it was super useful to have this branch running during the sprint, I guess it won’t be so useful until it actually is able to build the base box as well, since that is the next step. Anyway, bertagaz, if you dislike what I did, let me know and I’ll revert it and extract these bits to another branch.
#23 Updated by bertagaz 2017-04-19 08:39:08
- Assignee changed from bertagaz to anonym
anonym wrote:
> Hm, now the branch is of course broken on Jenkins (vmdebootstrap is not installed so maybe pushing to it wasn’t the smartest. OTOH, while it was super useful to have this branch running during the sprint, I guess it won’t be so useful until it actually is able to build the base box as well, since that is the next step. Any way, bertagaz, if you dislike what I did, let me know and I revert it and extract these bits to another branch.
Nope, that’s fine, that way we have a preview in Jenkins. I’ve installed `vmdebootstrap` on isobuilder1, as well as `mbr`, which was missing. They should be added to the requirements in the documentation about how to build Tails, btw.
Now there’s some weird error at the install-mbr stage, see https://jenkins.tails.boum.org/job/build_Tails_ISO_vagrant_wip-11972-use-vagrant-in-jenkins/304/console
#24 Updated by bertagaz 2017-04-19 09:01:27
bertagaz wrote:
> Now there’s some weird error at the install-mbr stage, see https://jenkins.tails.boum.org/job/build_Tails_ISO_vagrant_wip-11972-use-vagrant-in-jenkins/304/console
I can reproduce that locally btw.
#25 Updated by anonym 2017-04-19 16:55:52
- Assignee changed from anonym to bertagaz
- QA Check changed from Dev Needed to Info Needed
bertagaz wrote:
> Now there’s some weird error at the install-mbr stage, see https://jenkins.tails.boum.org/job/build_Tails_ISO_vagrant_wip-11972-use-vagrant-in-jenkins/304/console
This rang a bell for me, and, indeed, I apparently fixed this locally for me and then forgot about it:
--- /usr/lib/python2.7/dist-packages/vmdebootstrap/extlinux.py.orig 2016-09-11 17:07:37.000000000 +0200
+++ /usr/lib/python2.7/dist-packages/vmdebootstrap/extlinux.py 2016-12-06 12:42:24.567391084 +0100
@@ -107,7 +107,7 @@
             return
         if os.path.exists("/sbin/install-mbr"):
             self.message('Installing MBR')
-            runcmd(['install-mbr', self.settings['image']])
+            runcmd(['install-mbr', '--force', self.settings['image']])
         else:
             msg = "mbr enabled but /sbin/install-mbr not found" \
                   " - please install the mbr package."
When calling `vmdebootstrap` we use both `--mbr` and `--grub`, but I suspect that combination is invalid, and we should only use `--grub`. Do you mind testing?
#26 Updated by intrigeri 2017-04-20 07:16:23
- Target version changed from Tails_2.12 to Tails_3.0~rc1
(This blocks stuff we want to complete this early in the 3.0 cycle, so it doesn’t get in the way of 3.0~rc1 and 3.0 final.)
#27 Updated by bertagaz 2017-04-20 17:24:40
anonym wrote:
> This rang a bell for me, and, indeed, I apparently fixed this locally for me and then forgot about it:
> When calling `vmdebootstrap` we use both `--mbr` and `--grub`, but I suspect that combination is invalid, and we should only use `--grub`. Do you mind testing?
Confirmed it works by removing `--mbr` from the vmdebootstrap call, as shown in https://jenkins.tails.boum.org/job/build_Tails_ISO_vagrant_wip-11972-use-vagrant-in-jenkins/309.
I’m having a look at the code (I see you sneaked in some additional new features like `rake test`), and running it locally too.
> So, on the non-libvirt/vagrant isobuilders, could you please add a symlink from /amnesia.git to where ever the repo is located?
That’s not really possible actually, given there’s no permanent clone of the repo on the isobuilders. They are flushed after each build, and cloned into a directory whose name depends on the job name. But I don’t think it needs a fix anyway: our isobuilders will be switched to the Vagrant build system when this branch goes into production.
#28 Updated by anonym 2017-04-20 17:46:47
bertagaz wrote:
> anonym wrote:
> > This rang a bell for me, and, indeed, I apparently fixed this locally for me and then forgot about it:
> > When calling `vmdebootstrap` we use both `--mbr` and `--grub`, but I suspect that combination is invalid, and we should only use `--grub`. Do you mind testing?
>
> Confirmed it works by removing `--mbr` from the vmdebootstrap call, as shown in https://jenkins.tails.boum.org/job/build_Tails_ISO_vagrant_wip-11972-use-vagrant-in-jenkins/309.
Wow, it worked! :)
Btw, I saw this:
15:26:37 sudo: lsof: command not found
which is non-fatal. I pushed a commit adding it, so this is fixed when we generate the next base box.
> I’m having a look at the code (I see you sneaked in some additional new features like `rake test`), and running it locally too.
I implemented that during the sprint. Remember my pull request against `puppet-tails`? That was to add support so vagrant/libvirt isobuilders would use `rake test` instead of building the `./run_test_suite ...` command line themselves.
> > So, on the non-libvirt/vagrant isobuilders, could you please add a symlink from /amnesia.git to where ever the repo is located?
>
> That’s not really possible actually, given there’s no permanent clone of the repo on the isobuilders. They are flushed after each build, and cloned in a directory which name depends on the job name. But I don’t think it needs a fix anyway, our isobuilders will be switched to the vagrant build system when this branch will come in production.
Got it! That was only part of the plan to merge this branch early, even before all isobuilders migrate. Let’s forget about it.
#29 Updated by bertagaz 2017-05-02 14:52:13
- Assignee changed from bertagaz to anonym
anonym wrote:
> Btw, I saw this:
> […]
> which is non-fatal. I pushed a commit adding it, so this is fixed when we generate the next base box.
>
> > I’m having a look at the code (I see you sneaked in some additional new features like `rake test`), and running it locally too.
I updated the dependency requirements in our build documentation (+lsof, +vmdebootstrap, +mbr). That was the only remaining bit I saw. Code has been read and tested.
> I implemented that during the sprint. Remember my pull request against `puppet-tails`? That was to add support so vagrant/libvirt isobuilders would use `rake test` instead of building the `./run_test_suite ...` command line themselves.
But I did not test that part yet; I think I’ll merge that and open a ticket to review and deploy this in Jenkins. Btw, where can I find this pull request on puppet-tails?
> > > So, on the non-libvirt/vagrant isobuilders, could you please add a symlink from /amnesia.git to where ever the repo is located?
> >
> > That’s not really possible actually, given there’s no permanent clone of the repo on the isobuilders. They are flushed after each build, and cloned in a directory which name depends on the job name. But I don’t think it needs a fix anyway, our isobuilders will be switched to the vagrant build system when this branch will come in production.
>
> Got it! That was only part of the plan to merge this branch early, even before all isobuilders migrate. Let’s for get about it.
To me this ticket now looks ready to be merged. The only remaining items I see before we can start deploying it in Jenkins are Feature #11972#note-38 and Feature #11981#note-5, which are waiting for anonym.
#30 Updated by bertagaz 2017-05-02 17:50:58
- Assignee changed from anonym to bertagaz
- QA Check changed from Info Needed to Dev Needed
bertagaz wrote:
> Btw, where can I find this pull request on puppet-tails?
Found it!
#31 Updated by bertagaz 2017-05-03 12:16:32
- % Done changed from 0 to 100
- QA Check changed from Dev Needed to Pass
bertagaz wrote:
> But I did not test that part yet, I think I’ll merge that and will open a ticket for review and deploy this in Jenkins. Btw, where can I find this pull request on puppet-tails?
Created Feature #12503.
> To me this ticket looks ready now to be merged. Only remaining I see to start deploying it in Jenkins is Feature #11972#note-38 and Feature #11981#note-5, which are waiting for anonym.
#32 Updated by anonym 2017-05-04 10:55:20
bertagaz wrote:
> I updated the dependency requirements in our build documentation (+lsof,+vmdebootstrap,+mbr). That was the only remainings I see. Code has been read and tested.
In the end we don’t need `mbr`, right? Also, I can’t see `lsof` in the puppet manifest.
> To me this ticket looks ready now to be merged. Only remaining I see to start deploying it in Jenkins is Feature #11972#note-38 and Feature #11981#note-5, which are waiting for anonym.
Look at these commits:
971271137d Vagrant: Remove APT pining for syslinux-utils, fix the one for live-build.
da45e19a08 Bump Debian APT snapshots serial, previous one vanished.
d7e220c393 Vagrant: also use snapshots for Tails custom APT repos.
For simplicity I’ll refer to them as:
a1 Vagrant: Remove APT pining for syslinux-utils, fix the one for live-build.
b Bump Debian APT snapshots serial, previous one vanished.
a0 Vagrant: also use snapshots for Tails custom APT repos.
Note that `a1` is a fixup on `a0`. The problem here is that when we bump the snapshot in between with `b`, `a1` will not be part of the base box for users that tried building at or after `b` but before `a1`. So we’ll get two different possibilities for how the base box version set in `b` will actually turn out. This is fine for development branches that specifically modify the base box, but on any other branch (and on base branches in particular) there should be no changes in `vagrant/definitions` after the most recent base box version bump in `vagrant/Vagrantfile`.
I’m tempted to just trust us to remember this, but if anyone could come up with a sanity check that isn’t awkward for either users or developers, I’d appreciate it. Anyway, I don’t think we should block on this.
#33 Updated by bertagaz 2017-05-07 07:31:26
anonym wrote:
> In the end we don’t need `mbr`, right? Also, I can’t see `lsof` in the puppet manifest.
Right, it seems to work without `mbr`; removed. IIRC `lsof` is needed inside the Vagrant VM, but not on the host, but maybe I’m wrong.
> Feature #11981#note-5 is fixed, and Feature #11972#note-38 looks good, but when looking at the latter I realized something:
\o/
> Note that `a1` is a fixup on `a0`. The problem here is that when we bump the snapshot in between with `b`, `a1` will not be part of the base box for users that tried building at or after `b` but before `a1`. So we’ll get two different possibilities for how the base box version set in `b` will actually turn out. This is fine for development branches that specifically modify the base box, but on any other branch (and on base branches in particular) there should be no changes in `vagrant/definitions` after the most recent base box version bump in `vagrant/Vagrantfile`.
>
> I’m tempted to just trust us to remember this, but if any one could come up with a sanity check that isn’t awkward for either users nor developers I’d appreciate it. Any way, I don’t think we should block on this.
Ooooh, good catch. It’s probably better if we document that, so let’s consider writing this down somewhere as part of Feature #11982.
#34 Updated by anonym 2017-05-08 11:19:07
bertagaz wrote:
> Iirc lsof is needed inside the vagrant VM, but not on the host, but maybe I’m wrong.
Of course! Nevermind!
> > Note that `a1` is a fixup on `a0`. The problem here is that when we bump the snapshot in between with `b`, `a1` will not be part of the base box for users that tried building at or after `b` but before `a1`. So we’ll get two different possibilities for how the base box version set in `b` will actually turn out. This is fine for development branches that specifically modify the base box, but on any other branch (and on base branches in particular) there should be no changes in `vagrant/definitions` after the most recent base box version bump in `vagrant/Vagrantfile`.
> >
> > I’m tempted to just trust us to remember this, but if any one could come up with a sanity check that isn’t awkward for either users nor developers I’d appreciate it. Any way, I don’t think we should block on this.
>
> Ooooh, good catch. That’s probably better if we document that, so let’s consider writing this somewhere is part of Feature #11982.
I’m not super-convinced documenting this will be effective enough. A sanity check seems more appropriate, but I wouldn’t want the build to fail in that case, since that would be annoying for users (and for us, with users complaining, probably).
Hm. Actually I think we are stuck in an old mode of thinking with `config.vm.box` (in `vagrant/Vagrantfile`) being statically set by us when we feel a new release is in order. Instead, let’s make this versioning automatic by basing it on the last commit in the `vagrant/definitions/tails-builder` directory:
--- a/vagrant/Vagrantfile
+++ b/vagrant/Vagrantfile
@@ -21,1 +21,1 @@ require_relative 'lib/tails_build_settings'
- config.vm.box = 'tails-builder-amd64-jessie-20170105'
+ config.vm.box = 'tails-builder-amd64-jessie-${commit_date}-${commit_shortid}'
What do you think?
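For concreteness, here is a minimal sketch of how such a dynamic box name could be computed (an assumed helper, not the actual implementation):

    # Derive the base box name from the last commit touching the base box definition.
    def basebox_name
      commit_date, commit_shortid =
        `git log -1 --format='%cd %h' --date=format:%Y%m%d -- vagrant/definitions/tails-builder`.split
      "tails-builder-amd64-jessie-#{commit_date}-#{commit_shortid}"
    end

    # In vagrant/Vagrantfile one could then set:
    #   config.vm.box = basebox_name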
#35 Updated by bertagaz 2017-05-09 09:10:43
- Assignee changed from bertagaz to anonym
- QA Check changed from Pass to Info Needed
anonym wrote:
> I’m not super-convinced documenting this will be effective enough. A a sanity check seems more appropriate, but I wouldn’t want the build to fail then since that would be annoying for users (and us, with users complaining, probably).
Yes, that’s probably too much of a big hammer.
> Hm. Actually I think we are stuck in an old mode of thinking with `config.vm.box` (in `vagrant/Vagrantfile`) being statically set by us when we feel a new release is in order. Instead, let’s make this versioning automatic by basing it on the last commit in the `vagrant/definitions/tails-builder` directory:
> […]
> What do you think?
I think I like the idea. I don’t see a lot of drawbacks to it, apart from one for our users: they will end up having a new basebox in their storage each time there’s a new commit (and for every branch they build), which may quickly fill their disk space. So maybe, if we adopt this idea, we should adapt the `basebox:clean_old` task so that it automatically removes baseboxes that have a different commit encoded in their name?
Also, if we want some kind of stability of the build VM (which is what we wanted for ease of Tails reproducibility), maybe we need to keep encoding the APT snapshot used in the VM. So maybe we should rather use something like `tails-builder-amd64-jessie-2017010501-${commit_date}-${commit_shortid}`. Also, I wonder why we’d need `${commit_date}` after all?
I think I’ll merge this branch as is anyway: I don’t think the problem you raised is a blocker, and I’ve re-installed all our isobuilders for Vagrant-based building, so they won’t build anything until we merge it.
#36 Updated by anonym 2017-05-09 12:14:57
- Assignee changed from anonym to bertagaz
bertagaz wrote:
> anonym wrote:
> > Hm. Actually I think we are stuck in an old mode of thinking with `config.vm.box` (in `vagrant/Vagrantfile`) being statically set by us when we feel a new release is in order. Instead, let’s make this versioning automatic by basing it on the last commit in the `vagrant/definitions/tails-builder` directory:
> > […]
> > What do you think?
>
> I think I like the idea. I don’t see a lot of drawbacks to it, apart for our users: they will end up having a new basebox in their storage each time there’s a new commit (and for every branch they build), which may quickly fill their disk space.
You misunderstood: we only look at the last commit in the `vagrant/definitions/tails-builder` directory; I am not referring to the current Git HEAD! That directory changes very rarely (so you can forget your worries about disk space), and presumably if it does change there are good reasons to push a new base box with these changes.
> So maybe if we adopt this idea, we should adapt the `basebox:clean_old` task so that it removes automatically baseboxes that have a different commit encoded in their name?
I guess this is irrelevant now (?).
> Also if we want some kind of stability of the build VM (which was what we wanted for ease of Tails reproducibility), maybe we need to keep the encoding of the APT snapshot used in the VM. So maybe we should rather use something like `tails-builder-amd64-jessie-2017010501-${commit_date}-${commit_shortid}`. Also I wonder why we’d need `${commit_date}` after all?
Good catch! Indeed, we need to fix the serial somehow to make this stable, and also to make it so that a new base box will be built when the serials are bumped. Furthermore, I think we need to track different serials for the builders than for Tails itself (i.e. let’s not look in `config/APT_snapshots.d/`), since mid-release we might want to release a new builder that requires a bumped serial; for instance, in the middle of a release cycle we might want to upgrade `live-build`, so we want to bump the serial for the `tails` repo. I mean, for Tails we don’t even track a serial for that repo, and if we’d use e.g. the `debian` serial we’re in trouble because we don’t want to bump serials for Tails mid-release!
So I propose that we track the builder’s serials in `vagrant/definitions/tails-builder/config/APT_snapshots.d` (so it is still compatible with `auto/scripts/apt-snapshots-serials` if your working dir is `vagrant/definitions/tails-builder`) and use this naming scheme: `tails-builder-amd64-jessie-${commit_date}-${commit_shortid}`, where:
- `commit_shortid` refers to the last commit in the `vagrant/definitions/tails-builder` directory.
- `commit_date` refers to the date of `commit_shortid`. You are correct that we technically don’t need it, but having that piece of human-readable information is pretty nice to quickly get a rough understanding of where all base boxes come from. Also, with the date encoded in the filename, the current implementation of `rake basebox:clean*` still works, and it’s worth noting that the alternative implementations come with a price: mtime is unreliable; looking up the commit date from the commit id every time is a bit awkward.
This approach:
* solves the problem I identified in Feature #12409#note-32
* makes it very easy for us to release new base boxes (commit something in the `vagrant/definitions/tails-builder` dir, done)
* makes bumping builder snapshots as easy as:
cd vagrant/definitions/tails-builder
../../../auto/scripts/apt-snapshots-serials freeze
git commit config/APT_snapshots.d
(and due to the commit => a new base box is automatically released)
I quite like it!
Note that tracking the last commit to a directory is, more or less, like a Poor Man’s implementation of a Git submodule. We could move `vagrant/definitions/tails-builder` into its own Git submodule, and that would give the equivalent properties. We’d just have to ensure (in the `Rakefile`) that the submodule checkout is up-to-date before building, to avoid issues with out-dated submodule checkouts when switching between branches. Do we want this?
> I think I’ll merge this branch as is anyway, I don’t think the problem you raised is a blocker, and I’ve re-installed all our isobuilders to vagrant based building, so they won’t build anything until we merge it.
With my proposed solution, the transition will be completely seamless (including base box clean up).
#37 Updated by bertagaz 2017-05-09 13:06:36
- Assignee changed from bertagaz to anonym
anonym wrote:
> You misunderstood: we only look at the last commit in the `vagrant/definitions/tails-builder` directory; I am not referring to the current Git HEAD! That directory changes very rarely (so you can forget your worries about disk space), and presumably if it does change there are good reasons to push a new base box with these changes.
Ah, yes I missed that detail.
> > So maybe if we adopt this idea, we should adapt the `basebox:clean_old` task so that it removes automatically baseboxes that have a different commit encoded in their name?
>
> I guess this is irrelevant now (?).
Yes, your proposal now makes much more sense to me.
> I quite like it!
I do too! I would previously have thought that using another APT_snapshots directory would maybe be too much overhead for what we want to achieve, but in the end it has great advantages, and it’s just reusing code we already use anyway.
> Note that tracking the last commit to a directory is, more or less, like a Poor Man’s implementation of a Git submodule. We could move `vagrant/definitions/tails-builder` into its own Git submodule, and that would give the equivalent properties. We’d just have to ensure (in the `Rakefile`) that the submodule checkout is up-to-date before building to avoid issues with out-dated submodule checkouts when switching between branches. Do we want this?
I think I like the idea that our build scripts are hosted in the same repo as the Tails code, at least so that we don’t add network trouble into the build loop. But OTOH we already require submodules to build anyway. Not sure what advantages it would bring compared to your current proposal.
> > I think I’ll merge this branch as is anyway, I don’t think the problem you raised is a blocker, and I’ve re-installed all our isobuilders to vagrant based building, so they won’t build anything until we merge it.
>
> With my proposed solution, the transition will be completely seamless (including base box clean up).
That’s part of why I like it! :)
Do you want to implement it and have me review it?
#38 Updated by intrigeri 2017-05-10 11:51:35
> bertagaz wrote:
>> anonym wrote:
>> So maybe we should rather use something like `tails-builder-amd64-jessie-2017010501-${commit_date}-${commit_shortid}`.
Let’s not assume that we use the same serial for all repositories: they’re managed fully independently from each other. Thankfully what anonym proposed (encoding the Git commit ID, and encoding in Git the snapshots being used) avoids this problem :)
> So I propose that we track the builder’s serials in `vagrant/definitions/tails-builder/config/APT_snapshots.d` (so it is still compatible with `auto/scripts/apt-snapshots-serials` if your working dir is `vagrant/definitions/tails-builder`) and use this naming scheme: `tails-builder-amd64-jessie-${commit_date}-${commit_shortid}`, where:
Looks good to me.
> * makes bumping builder snapshots as easy as:
> cd vagrant/definitions/tails-builder
> ../../../auto/scripts/apt-snapshots-serials freeze
> git commit config/APT_snapshots.d
> (and due to the commit => a new base box is automatically released)
… and then bump the `Valid-Until` field for these snapshots, no? Once the "bump builder snapshots" process is documented, this should be mentioned in there (I’ve seen bertagaz having to bump serials due to previously using one whose lifetime had not been extended).
> I quite like it!
+1
> Note that tracking the last commit to a directory is, more or less, like a Poor Man’s implementation of a Git submodule. We could move `vagrant/definitions/tails-builder` into its own Git submodule, and that would give the equivalent properties. We’d just have to ensure (in the `Rakefile`) that the submodule checkout is up-to-date before building to avoid issues with out-dated submodule checkouts when switching between branches. Do we want this?
Any advantage, apart from having an implementation that uses appropriate higher-level concepts? I’m not sure it’s worth the hassle. I propose we don’t do it, get the simpler solution working, and iterate later if we find it problematic in practice.
#39 Updated by intrigeri 2017-05-10 11:55:01
- Assignee changed from anonym to bertagaz
So apparently, next step is to implement the chosen design. Yeah! This means that this research/discussion ticket is done, if I got it right, so let’s try to close it ⇒ please:
- file a ticket for anonym to implement the build box APT snapshots serial tracking
- check if there’s any other needed action identified on this ticket, and file tickets as needed
- close this ticket as resolved :)
Thanks!
#40 Updated by anonym 2017-05-10 17:27:22
- QA Check changed from Info Needed to Ready for QA
- Feature Branch set to feature/12409-improved-vagrant-base-box-versioning
intrigeri wrote:
> So apparently, next step is to implement the chosen design. Yeah! This means that this research/discussion ticket is done, if I got it right, so let’s try to close it ⇒ please:
>
> # file a ticket for anonym to implement the build box APT snapshots serial tracking
I’ve already finished it (see the feature branch), so I don’t think this overhead is necessary.
> # check if there’s any other needed action identified on this ticket, and file tickets as needed
I think the only other missing bit was the update for the release process, but the feature branch fixes that.
Perhaps bert has something more?
#41 Updated by bertagaz 2017-05-12 09:19:06
- Status changed from In Progress to Resolved
- Assignee deleted (bertagaz)
- QA Check changed from Ready for QA to Pass
anonym wrote:
> I’ve already finished it (see the feature branch), so I don’t think this overhead is necessary.
Agreed. I just merged this branch, congrats.
> I think the only other missing bit was the update for the release process, but the feature branch fixes that.
>
> Perhaps bert has something more?
Nope, don’t think so.
#42 Updated by bertagaz 2017-05-12 09:39:17
- Status changed from Resolved to In Progress
- Assignee set to anonym
- QA Check changed from Pass to Info Needed
I didn’t check btw, did you bump the APT snapshots Valid-until?
#43 Updated by bertagaz 2017-05-12 10:28:28
- Status changed from In Progress to Fix committed
Applied in changeset commit:2f36e35bee8038f24bcdd70706d81342d5d0a6cd.
#44 Updated by bertagaz 2017-05-12 10:44:28
- Status changed from Fix committed to In Progress
Silly automatic ticket update…
#45 Updated by anonym 2017-05-17 11:13:40
- Status changed from In Progress to Fix committed
- Assignee deleted (anonym)
- QA Check changed from Info Needed to Pass
bertagaz wrote:
> I didn’t check btw, did you bump the APT snapshots Valid-until?
Yes, the `debian`, `debian-security` and `tails` archives for 2017051002 are all bumped so they expire on July 1st.
#46 Updated by intrigeri 2017-05-17 12:16:00
- Status changed from Fix committed to In Progress
- Assignee set to bertagaz
Please update the design doc according to what was decided here, or create another ticket to track this.
#47 Updated by intrigeri 2017-05-19 10:17:55
- Target version changed from Tails_3.0~rc1 to Tails_3.0
Too late for 3.0~rc1 I guess, let’s postpone.
#48 Updated by bertagaz 2017-05-24 14:40:55
- Assignee changed from bertagaz to intrigeri
- QA Check changed from Pass to Ready for QA
intrigeri wrote:
> Please update the design doc according to what was decided here, or create another ticket to track this.
Done in commit:fccb10dd713b647d7b848646159b217ee3fefe9c I believe, please have a look.
#49 Updated by intrigeri 2017-05-24 16:28:09
- Assignee changed from intrigeri to bertagaz
- QA Check changed from Ready for QA to Dev Needed
> by storing the serials for the various APT repositories in a directory inside the vagrant one
Maybe just point to the relevant directory instead of vaguely describing where it is?
> changes in the build process
s/build process/Vagrant build system/, perhaps?
> We update the basebox APT snapshots serials at every Debian point release
This doesn’t match the changes that were applied in commit:5bc5dbb08f841635e756c401657ab8fcd0394a56.
> a long `Valid-Until` field, set to something like 6 months
Same as above, this seems to be obsolete.
> The ISO build aborts if the branch being built is not the same as the one for which the VM has been created initially.
I don’t think this is up-to-date as we reuse baseboxes across branches.
And in general:
- Please point to the relevant code for the most tricky bits, as we usually do in our design doc.
- Please format directory/file names appropriately, like this: `file`.
#50 Updated by intrigeri 2017-05-27 08:47:20
- Target version changed from Tails_3.0 to Tails_3.1
Please focus on actual breakage during this cycle.
#51 Updated by bertagaz 2017-05-29 15:30:18
- Assignee changed from bertagaz to intrigeri
- QA Check changed from Dev Needed to Ready for QA
Pushed commit:e80b299ff8b23d5ee890ededd76b018d0d039b56 that should address the remarks from your previous note. How does it look now?
#52 Updated by intrigeri 2017-05-30 08:21:03
- Status changed from In Progress to Resolved
- Assignee deleted (intrigeri)
- Target version changed from Tails_3.1 to Tails_3.0
- QA Check changed from Ready for QA to Pass
Pushed lots of smallish fixes on top. I could not find any ticket about turning this blueprint into design doc, so please create one: as written elsewhere recently, given the highly diverse match/mismatch level between our blueprints and actual implementation, they can’t be relied upon to accurately describe the current state of things, so whatever doc we have that does should live somewhere else. I would suggest working on that in September, when the dust has settled and we’re happy with our implementation; for now a blueprint is good! :)
#53 Updated by bertagaz 2017-05-30 11:20:52
intrigeri wrote:
> I could not find any ticket about turning this blueprint into design doc, so please create one
Ack, created Feature #12616