Feature #11979

Move Vagrant's apt-cacher-ng data to a dedicated disk

Added by intrigeri 2016-11-21 14:56:10. Updated 2017-05-23 09:04:29.

Status:
Resolved
Priority:
Elevated
Assignee:
Category:
Build system
Target version:
Start date:
2016-11-21
Due date:
% Done:

100%

Feature Branch:
wip/feature/11979-additional-disk-for-apt-cacher-ng
Type of work:
Code
Blueprint:

Starter:
Affected tool:
Deliverable for:
289

Description

… so that it can be shared between various Vagrant VMs.
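For context, vagrant-libvirt can attach an additional disk to a VM via its storage option; here is a minimal Vagrantfile sketch, where the file name and size are illustrative assumptions rather than the values used by the feature branch:

    # Sketch only: attach an extra qcow2 disk that several VM
    # definitions can share via allow_existing.
    Vagrant.configure("2") do |config|
      config.vm.provider :libvirt do |libvirt|
        libvirt.storage :file,
          path: "apt-cacher-ng-data.qcow2",  # hypothetical file name
          size: "15G",                       # hypothetical size
          type: "qcow2",
          allow_existing: true               # reuse the disk if it already exists
      end
    end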


Subtasks


Related issues

Blocks Tails - Feature #11980: Create and provision a new Vagrant VM for every ISO build Resolved 2016-11-21
Blocked by Tails - Feature #11972: Switch our Jenkins ISO build system to vagrant-libvirt Resolved 2016-12-04

History

#1 Updated by intrigeri 2016-11-21 14:56:37

  • blocks Feature #11980: Create and provision a new Vagrant VM for every ISO build added

#2 Updated by bertagaz 2016-11-29 11:36:27

  • Status changed from Confirmed to In Progress

Applied in changeset commit:f498eb97d5bd4ab8db455b2066b9a590c20b57bb.

#3 Updated by bertagaz 2016-11-29 15:09:20

  • Assignee set to anonym
  • Target version set to Tails_2.9.1
  • % Done changed from 0 to 50
  • QA Check set to Ready for QA
  • Feature Branch set to wip/feature/11979-additional-disk-for-apt-cacher-ng

Pushed a branch that works locally and in Jenkins (see the first Jenkins build mentioning vdb and the following build reusing it). Bonus point: it migrates the previous cache if apt-cacher-ng has already been used in the VM, even if that’s not really part of the use case we want to support.
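For illustration, a hedged sketch of the provisioning this implies; the device name and mount point come from this ticket, while the formatting logic is an assumption, not the branch’s actual code:

    # Sketch only: format the extra disk on first use, then mount it
    # where apt-cacher-ng keeps its cache.
    Vagrant.configure("2") do |config|
      config.vm.provision "shell", inline: <<-SHELL
        if ! blkid /dev/vdb > /dev/null; then
          mkfs.ext4 /dev/vdb                  # first boot: no filesystem yet
        fi
        mkdir -p /var/cache/apt-cacher-ng
        mount /dev/vdb /var/cache/apt-cacher-ng
        chown apt-cacher-ng:apt-cacher-ng /var/cache/apt-cacher-ng
      SHELL
    end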

What do you say about reviewing that, anonym?

#4 Updated by bertagaz 2016-12-03 09:08:13

bertagaz wrote:
> Bonus point: it migrates the previous cache if apt-cacher-ng has already been used in the VM, even if that’s not really part of the use case we want to support.

While working on Feature #11980 I removed this part, which was buggy and felt like too much optimization. The diff is now pretty short and simple, and it’s quite robust (thanks to Vagrant’s support for handling such additional disks).

My previous note points to a Jenkins job that has been running this ticket’s branch together with the Feature #11972 branch since then, without problems. Still, Feature #11972 has to be reviewed first, as this branch depends on it.

Same review policy as for this other ticket: if happy, please don’t merge it; I’ll take care of that, to get this deployed smoothly in Jenkins.

#5 Updated by bertagaz 2016-12-03 09:08:38

  • blocked by Feature #11972: Switch our Jenkins ISO build system to vagrant-libvirt added

#6 Updated by anonym 2016-12-07 14:43:48

  • Assignee changed from anonym to bertagaz
  • QA Check changed from Ready for QA to Dev Needed

#7 Updated by anonym 2016-12-08 15:18:55

  • % Done changed from 50 to 70
  • QA Check changed from Dev Needed to Info Needed

This branch looks good to me and I’m happy you removed the acng migration code!

I have an idea: following my “spoiler” in Feature #11972#note-14, I think it’d be a great idea to turn this disk into tails-builder-common.qcow2, and also store the Git checkout there (what is currently called WORKSPACE — that name probably made sense in the beginning, but it’s quite wrong these days, so feel free to rename it). That way a full Git clone is rarely needed, and the builds are sped up accordingly. Of course, then you shouldn’t mount it on /var/cache/apt-cacher-ng any more. :)

What do you think?

#8 Updated by anonym 2016-12-14 20:11:17

  • Target version changed from Tails_2.9.1 to Tails_2.10

#9 Updated by intrigeri 2017-01-11 07:43:48

  • Target version changed from Tails_2.10 to Tails_2.12

#10 Updated by anonym 2017-03-13 17:43:37

I’ve pushed fixes so that the disk is ignored unless vmproxy is used, which is what we’ll need when deploying this on Jenkins.
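As a hedged illustration of such a guard (the way the branch actually detects the vmproxy build option, and the TAILS_BUILD_OPTIONS variable used below, are assumptions):

    # Sketch only: attach the cache disk only when the "vmproxy"
    # option is among the build options.
    def vmproxy_enabled?
      ENV.fetch("TAILS_BUILD_OPTIONS", "").split.include?("vmproxy")
    end

    Vagrant.configure("2") do |config|
      config.vm.provider :libvirt do |libvirt|
        if vmproxy_enabled?
          libvirt.storage :file, path: "apt-cacher-ng-data.qcow2",
                                 size: "15G", allow_existing: true
        end
      end
    end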

#11 Updated by anonym 2017-03-13 17:57:56

anonym wrote:
> This branch looks good to me and I’m happy you removed the acng migration code!
>
> I have an idea: following my “spoiler” in Feature #11972#note-14, I think it’d be a great idea to turn this disk into tails-builder-common.qcow2, and also store the Git checkout there (what is currently called WORKSPACE — that name probably made sense in the beginning, but it’s quite wrong these days, so feel free to rename it). That way a full Git clone is rarely needed, and the builds are sped up accordingly. Of course, then you shouldn’t mount it on /var/cache/apt-cacher-ng any more. :)

Let’s skip this; we cannot do the same on Jenkins since it won’t use this disk, so let’s not complicate things.

#12 Updated by anonym 2017-03-13 18:00:34

  • % Done changed from 70 to 80
  • QA Check changed from Info Needed to Ready for QA

#13 Updated by anonym 2017-03-13 18:06:12

I, at least, am affected by a nasty bug: https://github.com/vagrant-libvirt/vagrant-libvirt/issues/746

I think this will complicate Feature #11980, since we will probably want to do a vagrant destroy when the build finishes. OTOH, we also need to be able to clean up existing old VMs even if vagrant/.vagrant is lost, and vagrant destroy cannot do that, so perhaps we need to use something else anyway.
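For illustration, one way the “something else” could work is to talk to libvirt directly, so that no .vagrant metadata is needed; in this sketch the domain name pattern is an assumption:

    # Sketch only: stop and remove leftover build VMs via virsh,
    # independently of Vagrant's own state files.
    require "open3"

    def destroy_stale_builders(pattern: /\Atails-builder-/)
      names, _status = Open3.capture2("virsh", "list", "--all", "--name")
      names.split("\n").grep(pattern).each do |domain|
        system("virsh", "destroy", domain)   # prints an error if not running
        system("virsh", "undefine", domain, "--remove-all-storage")
      end
    end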

#14 Updated by anonym 2017-03-15 02:35:47

Note that I’ve merged this ticket’s feature branch into wip/feature/11980-per-branch-vagrant-build-vm.

anonym wrote:
> I, at least, am affected by a nasty bug: https://github.com/vagrant-libvirt/vagrant-libvirt/issues/746
>
> I think this will complicate Feature #11980, since we will probably want to do a vagrant destroy when the build finishes. OTOH, we also need to be able to clean up existing old VMs even if vagrant/.vagrant is lost, and vagrant destroy cannot do that, so perhaps we need to use something else anyway.

In the end, I don’t think a fixed vagrant destroy would have been good enough for us, and I implemented something different for Feature #11981. Just to be clear: we are not blocked by (and won’t benefit from) this upstream vagrant-libvirt bug being solved.

#15 Updated by bertagaz 2017-03-15 11:54:53

  • Assignee changed from bertagaz to anonym
  • QA Check changed from Ready for QA to Dev Needed

anonym wrote:
> I’ve pushed fixes so that the disk is ignored unless vmproxy is used, which is what we’ll need when deploying this on Jenkins.

I made a review of the code. It seems ready to be merged to me, except:

  • ‘$TAILS_OFFLINE_BUILD’ in the Rakefile does not exist.
  • merging the base branch raises conflicts; it looks like this relates to the previous remark. Care to fix that? :)

Meanwhile I’ll run it locally. If that goes well, I’ll merge it together with your fix from my review.

#16 Updated by intrigeri 2017-03-15 12:50:26

> * ‘$TAILS_OFFLINE_BUILD’ in Rakefile does not exist.

I’ve fixed that on stable, devel, etc. earlier today.

#17 Updated by anonym 2017-03-15 13:05:13

  • Assignee changed from anonym to bertagaz
  • QA Check changed from Dev Needed to Ready for QA

#18 Updated by bertagaz 2017-03-15 15:21:37

  • % Done changed from 80 to 100
  • QA Check changed from Ready for QA to Pass

intrigeri wrote:
> > * ‘$TAILS_OFFLINE_BUILD’ in Rakefile does not exist.
>
> I’ve fixed that on stable, devel, etc. earlier today.

OK, nothing more was raised by testing it. Ready to be merged, congrats! I’ll do that together with Feature #11980.

#19 Updated by intrigeri 2017-04-20 07:15:04

  • Priority changed from Normal to Elevated
  • Target version changed from Tails_2.12 to Tails_3.0~rc1

(The plan is to complete this early in the 3.0 cycle, so it doesn’t get in the way of 3.0~rc1 and 3.0 final.)

#20 Updated by bertagaz 2017-05-09 13:57:57

  • Status changed from In Progress to Fix committed

Applied in changeset commit:79b7f9ca5f09584bcaa4d948bff56ca2d9ffa30a.

#21 Updated by bertagaz 2017-05-10 10:05:10

  • Assignee deleted (bertagaz)

#22 Updated by intrigeri 2017-05-23 09:04:29

  • Status changed from Fix committed to Resolved