Feature #11972

Switch our Jenkins ISO build system to vagrant-libvirt

Added by bertagaz 2016-11-20 14:36:26. Updated 2017-05-10 12:19:19.

Status:
Resolved
Priority:
Normal
Assignee:
Category:
Infrastructure
Target version:
Start date:
2016-12-04
Due date:
% Done:

100%

Feature Branch:
wip/11972-use-vagrant-in-jenkins
Type of work:
Sysadmin
Blueprint:

Starter:
Affected tool:
Deliverable for:
289

Description

Having reproducible builds requires using the same build system everywhere. We should update our isobuilders and manifests to switch the ISO build process on Jenkins to the now-canonical way of using vagrant-libvirt.


Subtasks

Feature #12017: Update our Vagrant ISO build basebox wrt. vagrant-libvirt (Resolved, 100%)

Feature #12327: Upgrade Lizard's memory again (Duplicate, 100%)


Related issues

Related to Tails - Bug #12009: Jenkins ISO builders are highly unreliable Resolved 2016-12-01
Related to Tails - Feature #12503: Use rake to start the test suite in Jenkins In Progress 2017-05-03
Blocks Tails - Feature #11979: Move Vagrant's apt-cacher-ng data to a dedicated disk Resolved 2016-11-21
Blocks Tails - Feature #11980: Create and provision a new Vagrant VM for every ISO build Resolved 2016-11-21

History

#1 Updated by intrigeri 2016-11-20 15:04:26

  • Deliverable for set to 289

#2 Updated by bertagaz 2016-11-21 13:24:18

  • Status changed from Confirmed to In Progress

Applied in changeset commit:70ab51231aca9c88a81e8a3622a8ba531e437c29.

#3 Updated by intrigeri 2016-12-01 12:37:38

  • related to Bug #12009: Jenkins ISO builders are highly unreliable added

#4 Updated by intrigeri 2016-12-01 12:55:15

What’s your ETA on this one? I wonder how much effort we should put into Bug #12009, i.e. whether we can reasonably “wait” for Feature #11972 to be done.

#5 Updated by intrigeri 2016-12-01 13:59:56

Commit 5c41d23d15da324c18e0920a3c9c1c7f467b938e in puppet-tails does weird things with regexps. Even if we’re lucky enough that it does what you mean it to do, I find it hard to understand both the intent, and the effect of the new regexp.

#6 Updated by intrigeri 2016-12-01 14:12:48

  • I don’t understand what the deal is wrt. tails::builder vs. tails::iso_builder, but I expect that’s temporary and you’ll refactor this before asking for a proper review :)
  • Please s/uses_vagrant/use_vagrant/ for better consistency.

Otherwise, current Puppet code looks OK.

#7 Updated by bertagaz 2016-12-03 09:01:03

  • Assignee changed from bertagaz to intrigeri
  • % Done changed from 0 to 50
  • QA Check set to Ready for QA

intrigeri wrote:
> * I don’t understand what the deal is wrt. tails::builder vs. tails::iso_builder, but I expect that’s temporary and you’ll refactor this before asking for a proper review :)

My plan is to switch all isobuilders to the t::iso_builder manifest and get rid of the t::builder one, as we once discussed the bad naming of the latter. That’s a way to transition to a better name. So I’ll put this ticket to RfQA first, and once it’s reviewed, I’ll upgrade all the isobuilders so they all use t::iso_builder; after that, t::builder will disappear. Does that make sense?

> * Please s/uses_vagrant/use_vagrant/ for better consistency.

Fixed!

IMO this branch is fine now and RfQA. It builds fine in Jenkins, and merging it is blocking Feature #11979 and Feature #11980, for which I also have branches ready.
anonym agreed to review these two other tickets; shall we put the review of this ticket on his plate together with the others, or do you want to review this one yourself, to have a closer look at how it runs in Jenkins? Assign it to anonym if you feel that’s more appropriate.

#8 Updated by bertagaz 2016-12-03 09:08:38

  • blocks Feature #11979: Move Vagrant's apt-cacher-ng data to a dedicated disk added

#9 Updated by intrigeri 2016-12-03 10:48:10

  • Target version changed from 2017 to Tails_2.9.1

#10 Updated by intrigeri 2016-12-03 10:49:01

  • blocks Feature #11980: Create and provision a new Vagrant VM for every ISO build added

#11 Updated by intrigeri 2016-12-04 15:24:43

  • Assignee changed from intrigeri to bertagaz
  • QA Check changed from Ready for QA to Dev Needed
  • Feature Branch set to wip/11972-use-vagrant-in-jenkins

> My plan is to switch all isobuilders to the t::iso_builder manifest and get rid of the t::builder one, as we once discussed the bad naming of the latter. That’s a way to transition to a better name. So I’ll put this ticket to RfQA first, and once it’s reviewed, I’ll upgrade all the isobuilders so they all use t::iso_builder; after that, t::builder will disappear. Does that make sense?

Makes sense. Still, I’d rather review the corresponding Puppet code changes at the same time as I review the rest of the work this ticket is about. Can you please prepare them in a branch, so that everything is ready and you don’t have to choose between pushing unreviewed code to production, or blocking on me, when you’ll be upgrading the ISO builders?

> IMO this branch is fine now and RfQA.

I’ll assume that “this branch” is wip/11972-use-vagrant-in-jenkins. I did a first review:

  • It all makes sense generally.
  • The BUILD_START_FILENAME and BUILD_END_FILENAME stuff only made sense on Jenkins, and is actually obsolete, so I don’t see the point of starting to generate these files in all cases now. Just drop these two lines?
  • Why do we drop ./build-website, which we used to run on Vagrant builds? Does it change the way the build process behaves for current (non-Jenkins) Vagrant users?
  • commit:577f87fa88388208118c5eca622def5b5d05c4db has the same problem that I’ve already reported about commit 5c41d23d15da324c18e0920a3c9c1c7f467b938e in puppet-tails, which was not fixed either.
  • I don’t understand why we need a big hammer like commit:093a92f63ac966865f7aa575069e479388210865; I find it hard to reason on the side effects it may have, so I would like to first check if there might be a nicer and safer solution. I find it extremely surprising that something that we need is missing in the .git directory, but present in the working copy, which is why I have my doubts about whether mounting the entire WC is the right solution to the problem you’ve seen. The commit message doesn’t help me much, since I don’t know what’s “a partial clone of the Tails repo”.
  • Thanks a lot for commit:2aed0cebfcb51b05f7f51cfce1e6adc477df825c :) It’ll save me some time when doing local dev builds (I currently mv the artifacts somewhere else).

#12 Updated by intrigeri 2016-12-04 15:42:39

More questions, after I thought I would look at jenkins-jobs.git too (next time, please point me to all the repos/branches where there’s stuff to review):

  • why do we set defaultcomp manually?
  • why do we set cpus=4 manually? I thought we had a mechanism to autodetect this.
  • is GIt_COMMIT a typo? It feels strange.
  • please quote strings that contain variables in macros/builders.yaml. This allows detecting mistakes such as git reset --hard $UNSET_VARIABLE.

Also:

  • in sign_artifacts, please don’t recycle exit codes for unrelated errors.
  • (OT) in compare_artifacts, is if ! cmp -s ${ARTIFACTS_DIR}/tails-*.iso really needed? I was hoping that diffoscope would do that check itself.
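The failure mode behind the quoting advice can be sketched in shell (assuming the usual shell-quoting rationale): an unquoted, unset variable vanishes from the argument list entirely, so `git reset --hard $UNSET_VARIABLE` silently degrades to `git reset --hard`, whereas the quoted form passes an empty argument that git rejects with an error. A minimal demonstration of the word-splitting difference:

```shell
#!/bin/sh
# Count the arguments the command would actually receive in each case.
# (Illustrative only; no git command is run here.)
unset UNSET_VARIABLE
set -- git reset --hard $UNSET_VARIABLE    # the ref argument disappears
unquoted_args=$#
set -- git reset --hard "$UNSET_VARIABLE"  # an empty-string argument survives
quoted_args=$#
echo "unquoted: $unquoted_args args, quoted: $quoted_args args"
```

With the quoted form, git would receive the empty string as a ref and fail loudly, which is exactly the mistake-detection being asked for.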

#13 Updated by intrigeri 2016-12-06 10:03:43

Another question: I’ve seen a number of build failures like https://jenkins.tails.boum.org/view/Tails_ISO/job/build_Tails_ISO_vagrant_wip-11972-use-vagrant-in-jenkins/86/console:

17:36:00 This is the first time that the Tails builder virtual machine is
17:36:00 started. The virtual machine template is about 300 MB to download,
17:36:00 so the process might take some time.
17:36:00 
17:36:00 Please remember to shut the virtual machine down once your work on
17:36:00 Tails is done:
17:36:00 
17:36:00     $ rake vm:halt
17:36:00 
17:36:01 Bringing machine 'default' up with 'libvirt' provider...
17:36:02 Name `tails-builder-amd64-jessie-20160226_default` of domain about to create is already taken. Please try to run
17:36:02 `vagrant up` command again.
17:36:02 rake aborted!

Was this fixed on the branch already?

#14 Updated by anonym 2016-12-08 15:12:04

Code review:

> 093a92f Mount whole Tails repo in vagrant builder VM and copy it.

As I’ve already expressed in Feature #11980#note-12, using rsync instead of git will copy crap (e.g. build artifacts) from the host into the builder. Let’s move back to git! In the commit message you say:

> […] there are no Git ref to the base branch being merged

Just pass --no-local to git clone and all refs will be copied. I guess this will take longer than rsync for now, but I have an idea on Feature #11979 that will fix it later (spoiler: move what we now call WORKSPACE (i.e. the Git repo) to the extra disk).

-mv -f tails-* "$ARTIFACTS_DIR"
+if [ "$TAILS_RAM_BUILD" ]; then
+       mv -f tails-* "$WORKSPACE/"
+fi

This feels wrong — when we are not building in RAM, we still build inside BUILD_DIR, so the artifacts must still be copied back to WORKSPACE.

A related issue (not due to this branch!) is that we apparently only run remove_build_dirs before we set up the new BUILD_DIR if we are building from RAM, which feels wrong for the same reason. I guess we’ve been confused and assumed that disk builds are very different from RAM builds, when they in fact are not. I’d appreciate if you’d fix it while you’re at it, but won’t block on this.
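The flow being argued for, where cleanup and the artifact copy-back both happen unconditionally rather than only for RAM builds, could be sketched like this (remove_build_dirs, BUILD_DIR, and WORKSPACE are illustrative stand-ins for the real build-script machinery, not its actual code):

```shell
#!/bin/sh
# Illustrative sketch only: the stub remove_build_dirs and the fake
# artifact stand in for the real build process.
set -e
WORKSPACE=$(mktemp -d)
remove_build_dirs() { rm -rf /tmp/tails-build-demo.*; }

remove_build_dirs                           # always run, RAM build or not
BUILD_DIR=$(mktemp -d /tmp/tails-build-demo.XXXXXX)
touch "$BUILD_DIR/tails-demo.iso"           # pretend the build produced this
mv -f "$BUILD_DIR"/tails-* "$WORKSPACE/"    # always copy artifacts back
ls "$WORKSPACE"
```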

> 2aed0ce Respect the ‘ARTIFACTS’ environment variable if set.

Minor nitpicking (not blocking): ENV stores strings, so the quoting and #{} just reduce readability here:

+        "#{user}@#{hostname}:#{artifact}", "#{ENV['ARTIFACTS']}"

Once you’ve fixed all these (except those I’ve explicitly said are not blocking) I’m happy with this branch.

#15 Updated by intrigeri 2016-12-08 15:17:32

anonym: I’m glad that you looked at this branch, and I would love if you could handle the rest of the review’n’merge process: you’re obviously much more knowledgeable than I am in the Vagrant bits :)

#16 Updated by anonym 2016-12-08 16:11:08

bertagaz, also please add /*.timestamp to .gitignore.

intrigeri wrote:
> anonym: I’m glad that you looked at this branch, and I would love if you could handle the rest of the review’n’merge process: you’re obviously much more knowledgeable than I am in the Vagrant bits :)

Sure, as long as you double-check my potentially crazy suggestions. :)

#17 Updated by anonym 2016-12-14 20:11:17

  • Target version changed from Tails_2.9.1 to Tails 2.10

#18 Updated by intrigeri 2017-01-11 07:43:49

  • Target version changed from Tails 2.10 to Tails_2.12

#19 Updated by intrigeri 2017-03-16 12:16:00

I had a quick look at the diff:

  • vagrant/definitions/tails-builder/postinstall.sh uses a non-frozen version of our custom APT repo, which means that we will affect all the baseboxes that we already have generated in the past whenever we update a package in our builder-jessie APT suite. This feels a bit wrong. Any reason for doing it this way?
  • AFAICT this branch will break the build on all isobuilders that haven’t been switched to Vagrant yet, so unless this migration is rushed, regardless of whether we merge it early or late, we’ll have to live with fewer isobuilders. If not too painful, I would prefer avoiding this, i.e. it would be nice if this branch added support for building in Vagrant without breaking what currently works. Now, if the migration can be completed within 2-3 days max, and at a sensible time wrt. our dev/release cycle, don’t bother and go for it.
  • Passing -m 0 to mkfs.ext4 would avoid wasting disk space.
  • Why do we create a partition on /dev/vdb? We could simply format the drive itself, which makes various operations easier.
  • Please mount /var/cache/apt-cacher-ng with relatime.
  • Is /var/log/apt-cacher-ng/main_*.html still correct while we configured a different LogDir?
  • Currently /var/cache/apt-cacher-ng is mounted by hand, which means that no automatic fsck is performed, and developer experience will suffer whenever this FS is corrupted. Perhaps add it to fstab instead? (augtool can be handy)
  • Using journal_checksum would make the ext4 filesystems more robust, which could improve the developer experience.
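For the fstab suggestion above, the entry might look like the following. The device name /dev/vdb and the relatime option come from this review; the fsck pass number 2 enables the automatic fsck being asked for, and the remaining fields are assumptions:

```
/dev/vdb   /var/cache/apt-cacher-ng   ext4   relatime   0   2
```

Whether the entry is written by hand, via augtool, or by Puppet, the key point is the non-zero pass number in the last field, which is what makes fsck run automatically at boot.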

#20 Updated by intrigeri 2017-03-16 12:24:22

I’ve tried building from this branch with http_proxy=http://10.36.24.33:3142 TAILS_BUILD_OPTIONS="ram extproxy" rake build and it seems that the /amnesia.git working copy is used as if it were a bare repo, or a .git tree, and then the build fails:

==> default: Should be mounting folders
==> default:  /amnesia.git, opts: {:type=>:"9p", :readonly=>true, :guestpath=>"/amnesia.git", :hostpath=>"/home/intrigeri/tails/git", :disabled=>false, :__vagrantfile=>true, :target=>"/amnesia.git", :accessmode=>"passthrough", :mount=>true, :mount_tag=>"86502a772c54ee52df905a151ae3a9d"}
==> default: Starting domain.
==> default: Waiting for domain to get an IP address...
==> default: Waiting for SSH to become available...
==> default: Creating shared folders metadata...
==> default: mounting p9 share in guest
==> default: Machine already provisioned. Run `vagrant provision` or use the `--provision`
==> default: flag to force provisioning. Provisioners marked to run always will still run.
fatal: Not a git repository: '/amnesia.git'
Connection to 192.168.121.220 closed.
rake aborted!
VagrantCommandError: 'vagrant ["ssh", "-c", "MKSQUASHFS_OPTIONS='-comp gzip' TAILS_PROXY='http://10.36.24.33:3142' TAILS_PROXY_TYPE='extproxy' TAILS_RAM_BUILD='1' TAILS_GIT_COMMIT='50a8f71e94ac2ec8ee0d2d6ac972fd3a85dec817' TAILS_GIT_REF='wip/11972-use-vagrant-in-jenkins' build-tails"]' command failed: 128
/home/intrigeri/tails/git/Rakefile:71:in `run_vagrant'
/home/intrigeri/tails/git/Rakefile:353:in `block in <top (required)>'
Tasks: TOP => build

#21 Updated by intrigeri 2017-03-16 12:39:14

Also, the rsync -a should perhaps have the --delete option? Otherwise files we’ve deleted in Git may still be in the tree used for building.

#22 Updated by intrigeri 2017-03-16 12:40:24

intrigeri wrote:
> […] and then the build fails:

Note that git status is happy when run in /amnesia.git inside the VM, so I’m not quite sure why we get fatal: Not a git repository: '/amnesia.git'.

#23 Updated by intrigeri 2017-03-16 17:04:48

Here’s the fix to the fatal: ambiguous argument 'origin/stable': unknown revision or path not in the working tree. error seen e.g. on https://jenkins.tails.boum.org/job/build_Tails_ISO_vagrant_wip-11972-use-vagrant-in-jenkins/183/console, that prompted you to switch from git clone to rsync: after cloning, run git config remote.origin.fetch +refs/remotes/origin/*:refs/remotes/origin/* && git fetch. And then you’ll have access to the refs you want :)
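The fix can be reproduced on throwaway repositories. A plain clone only maps the source's refs/heads/* into refs/remotes/origin/*, so a remote-tracking ref like origin/stable in the source checkout is not carried over; the extra fetch refspec brings it in. (The update-ref call below simulates such a ref; all paths are temporary.)

```shell
#!/bin/sh
# Self-contained illustration of the refspec fix described above.
set -e
src=$(mktemp -d); clone=$(mktemp -d)/clone
git -C "$src" init -q
git -C "$src" -c user.email=t@example.org -c user.name=t \
    commit -q --allow-empty -m init
# Simulate a checkout that itself carries a remote-tracking ref:
git -C "$src" update-ref refs/remotes/origin/stable HEAD
git clone -q "$src" "$clone"
cd "$clone"
before=$(git rev-parse -q --verify origin/stable >/dev/null \
    && echo present || echo missing)
git config remote.origin.fetch '+refs/remotes/origin/*:refs/remotes/origin/*'
git fetch -q
after=$(git rev-parse -q --verify origin/stable >/dev/null \
    && echo present || echo missing)
echo "origin/stable before: $before, after: $after"
```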

#24 Updated by bertagaz 2017-04-06 14:27:27

  • Target version changed from Tails_2.12 to Tails_3.0

#25 Updated by intrigeri 2017-04-06 15:24:25

bertagaz will split this ticket to disentangle the tails.git part (that’s mostly ready) from the Puppet bits.

#26 Updated by intrigeri 2017-04-18 15:15:05

  • Target version changed from Tails_3.0 to Tails_3.0~rc1

(The plan is to complete this early in the 3.0 cycle, and I don’t want to see it interfering in the last few weeks before 3.0 final.)

#27 Updated by bertagaz 2017-04-25 14:28:58

  • Assignee changed from bertagaz to intrigeri
  • QA Check changed from Dev Needed to Info Needed

I’ve been through all the notes on this ticket to check that every issue raised was addressed. I’ve pushed commit:0d11da9cc7fc50bb33db296443351e5a6bee4121 and commit:09ca60ea29a1491f675d5b1ae768980570ac45af, which address the last remaining items from intrigeri’s review in Feature #11972#note-19.

The only one I’m not sure of is:

intrigeri wrote:
> * vagrant/definitions/tails-builder/postinstall.sh uses a non-frozen version of our custom APT repo, which means that we will affect all the baseboxes that we already have generated in the past whenever we update a package in our builder-jessie APT suite. This feels a bit wrong. Any reason for doing it this way?

No reason, apart from the fact that we did not use a frozen version of our custom APT repo in the build system. It makes sense to use a frozen version in the basebox, though. Where can I find an example of the URI of a frozen version of our repo? I’m having trouble finding that.

#28 Updated by anonym 2017-04-25 16:14:10

bertagaz wrote:
> I’ve been through all the notes on this ticket to check that every issue raised was addressed. I’ve pushed commit:0d11da9cc7fc50bb33db296443351e5a6bee4121 and commit:09ca60ea29a1491f675d5b1ae768980570ac45af, which address the last remaining items from intrigeri’s review in Feature #11972#note-19.

I pushed a fixup (commit:045d3294aaea891f81c0a47034300df5465aef8f) on commit:0d11da9. That commit also made me realize that we should try to avoid the need of fsck by unmounting the disk, so I pushed commit:0b3c939bf7e6fbb1503ab87f9d1cea34a2ee50cb to do just that.

> The only one I’m not sure of is:
>
> intrigeri wrote:
> > * vagrant/definitions/tails-builder/postinstall.sh uses a non-frozen version of our custom APT repo, which means that we will affect all the baseboxes that we already have generated in the past whenever we update a package in our builder-jessie APT suite. This feels a bit wrong.

I agree that this feels wrong.

> > Any reason for doing it this way?

From my side it’s just that I didn’t think about it. Thanks for noticing! :)

> No reason, apart from the fact that we did not use a frozen version of our custom APT repo in the build system. It makes sense to use a frozen version in the basebox, though. Where can I find an example of the URI of a frozen version of our repo? I’m having trouble finding that.

AFAICT, we only do time-based snapshots of our upstream APT repos, i.e. Debian and Tor Project, not of our own custom APT repo. When we create tagged snapshots (for releases) we do include the packages from our custom APT repo that were used when building. Unfortunately “used” here refers to what’s installed when running live-build, so anything installed before that, including live-build itself, is not included. In other words, we currently have no snapshots that include most of these packages (incl. live-build).

intrigeri, what about adding our custom APT repo’s builder-jessie suite to the set of repos we do time-based snapshots of? Otherwise I’m not sure how we can solve this.

#29 Updated by intrigeri 2017-04-25 16:58:52

  • Assignee changed from intrigeri to bertagaz

I’m sorry that the pointers in https://tails.boum.org/contribute/APT_repository/time-based_snapshots/ were not enough to highlight this (hints welcome wrt. how this could be improved, on a dedicated ticket please, because I checked and could not find what’s missing), but we do have snapshots of a few dists already: https://time-based.snapshots.deb.tails.boum.org/tails/.

If we need some more dists in there, the files that need changes are in https://git-tails.immerda.ch/puppet-tails/tree/templates/reprepro/snapshots/time_based/tails (transitively used by https://git-tails.immerda.ch/puppet-tails/tree/manifests/reprepro/snapshots/time_based.pp).

Or, give me a dedicated ticket if you can wait a few days and don’t feel comfortable applying the change yourself (it should be relatively straightforward given the existing examples, some understanding of reprepro, and… an up-to-date set of remote backups :)

#30 Updated by anonym 2017-04-25 17:26:26

intrigeri wrote:
> I’m sorry that the pointers in https://tails.boum.org/contribute/APT_repository/time-based_snapshots/ were not enough to highlight this (hints welcome wrt. how this could be improved, on a dedicated ticket please, because I checked and could not find what’s missing), but we do have snapshots of a few dists already: https://time-based.snapshots.deb.tails.boum.org/tails/.

Speaking for myself, I’m not very used to interacting with APT repos over HTTP, and when I have tried I generally just keep getting 403 Forbidden (especially with our repos). I don’t think our docs can be improved here without becoming too verbose, short of listing what we snapshot, which would quickly get outdated.

#31 Updated by intrigeri 2017-04-26 06:36:33

> No reason, apart from the fact that we did not use a frozen version of our custom APT repo in the build system.

Right. FYI that’s because:

  1. we use our custom APT repo for freeze exceptions, so it can’t be frozen;
  2. we control what’s uploaded in there so we can enforce our freeze ourselves (as opposed to what happens with the Debian repos).

> It makes sense to use a frozen version in the basebox, though. Where can I find an example of the URI of a frozen version of our repo? I’m having trouble finding that.

I’m interested in improving our doc so you can find such things yourself next time, so may I ask: where exactly did you look (and fail to find it)?

#32 Updated by intrigeri 2017-04-26 06:50:15

> Speaking for myself, I’m not very used to interacting with APT repos over HTTP, and generally when I have tried I just keep on getting 403 Forbidden (especially with our repos).

Please report a bug next time you see that, as our APT repos should be publicly browseable (except some reprepro config and database internal bits that are not meant for consumption by APT). Thanks in advance!

#33 Updated by bertagaz 2017-04-27 11:46:43

  • QA Check changed from Info Needed to Dev Needed

anonym wrote:
> I pushed a fixup (045d3294aaea891f81c0a47034300df5465aef8f) on 0d11da9. That commit also made me realize that we should try to avoid the need of fsck by unmounting the disk, so I pushed 0b3c939bf7e6fbb1503ab87f9d1cea34a2ee50cb to do just that.

Nice, that should be pretty robust now.

intrigeri wrote:
> I’m sorry that the pointers in https://tails.boum.org/contribute/APT_repository/time-based_snapshots/ were not enough to highlight this (hints welcome wrt. how this could be improved, on a dedicated ticket please, because I checked and could not find what’s missing), but we do have snapshots of a few dists already: https://time-based.snapshots.deb.tails.boum.org/tails/.
>
> I’m interested in improving our doc so you can find such things yourself next time, so may I ask: where exactly did you look (and failed to find)?

I’ve looked at the related APT pages in the documentation, and browsed the time-based.snapshots.d.t.b.o website; I found that there were snapshots for the main suites of our APT repo, but did not find one for the builder-jessie suite. From past discussions we had, I thought this suite actually had snapshots, but could not find them. So I don’t think that’s a documentation problem in the end; rather, I misunderstood you. :)

> If we need some more dists in there, the files that need changes are in https://git-tails.immerda.ch/puppet-tails/tree/templates/reprepro/snapshots/time_based/tails (transitively used by https://git-tails.immerda.ch/puppet-tails/tree/manifests/reprepro/snapshots/time_based.pp).
>
> Or, give me a dedicated ticket if you can wait a few days and don’t feel comfortable applying the change yourself (it should be relatively straightforward given the existing examples, some understanding of reprepro, and… an up-to-date set of remote backups :)

Thanks for the pointers, precisely what I needed!

I have an unpushed commit implementing the necessary changes in the main tails.git repo.

Now I’ll add the builder-jessie time-based snapshots to our Puppet/reprepro setup. This indeed seems straightforward at the Puppet level.

I’m updating my backups, but that should take some time.

I’m not sure, though, whether some steps are necessary after deploying the Puppet code, e.g. running commands by hand at the reprepro level, or whether deploying it is enough.

#34 Updated by bertagaz 2017-04-27 13:17:12

bertagaz wrote:
> intrigeri wrote:
> > Or, give me a dedicated ticket if you can wait a few days and don’t feel comfortable applying the change yourself (it should be relatively straightforward given the existing examples, some understanding of reprepro, and… an up-to-date set of remote backups :)
>
> I’m updating my backups, but that should take some time.

Wait, now I realize that we actually don’t back up time-based snapshots, so this step is probably not that important, apart from the reprepro configuration I guess.

#35 Updated by bertagaz 2017-04-28 08:45:48

bertagaz wrote:
> Now I’ll add the builder-jessie time-based snapshots to our Puppet/reprepro setup. This indeed seems straightforward at the Puppet level.

Done: commit:048bc8cf3ea704e8e2768a818b4148ce0ea406f4 in puppet-tails, and I pushed commit:d7e220c39345c325a404522e33b3c6df29852881 in the main tails.git repo. I had to bump the serial of the basebox, as the previously used Debian snapshot has disappeared. I have not yet uploaded the related new basebox.

This seems to work, the builder-jessie APT repo snapshot is used in the vagrant VM.

But now the build is broken with a “NoMethodError: undefined method `active?' for nil:NilClass” error, as seen in https://jenkins.tails.boum.org/job/build_Tails_ISO_vagrant_wip-11972-use-vagrant-in-jenkins/342/console. I’m not sure where that comes from. Will investigate.

#36 Updated by anonym 2017-04-28 09:39:52

bertagaz wrote:
> bertagaz wrote:
> > Now I’ll add the builder-jessie time-based snapshots to our Puppet/reprepro setup. This indeed seems straightforward at the Puppet level.
>
> Done: commit:048bc8cf3ea704e8e2768a818b4148ce0ea406f4 in puppet-tails, and I pushed commit:d7e220c39345c325a404522e33b3c6df29852881 in the main tails.git repo. I had to bump the serial of the basebox, as the previously used Debian snapshot has disappeared. I have not yet uploaded the related new basebox.
>
> This seems to work, the builder-jessie APT repo snapshot is used in the vagrant VM.

Awesome!

> But now the build is broken with a “NoMethodError: undefined method `active?' for nil:NilClass” error, as seen in https://jenkins.tails.boum.org/job/build_Tails_ISO_vagrant_wip-11972-use-vagrant-in-jenkins/342/console. I’m not sure where that comes from. Will investigate.

That was my fault. I have pushed an (untested) fix.

#37 Updated by bertagaz 2017-04-28 10:59:44

anonym wrote:
> bertagaz wrote:
> > Done: commit:048bc8cf3ea704e8e2768a818b4148ce0ea406f4 in puppet-tails, and I pushed commit:d7e220c39345c325a404522e33b3c6df29852881 in the main tails.git repo. I had to bump the serial of the basebox, as the previously used Debian snapshot has disappeared. I have not yet uploaded the related new basebox.
> >
> > This seems to work, the builder-jessie APT repo snapshot is used in the vagrant VM.

While testing, I realized I had made a mistake in the pinning for live-build and syslinux-utils. I’ll fix that, and then it should be ready for review.

> > But now the build is broken with a “NoMethodError: undefined method `active?' for nil:NilClass” error, as seen in https://jenkins.tails.boum.org/job/build_Tails_ISO_vagrant_wip-11972-use-vagrant-in-jenkins/342/console. I’m not sure where that comes from. Will investigate.
>
> That was my fault. I have pushed an (untested) fix.

Seems to work!

#38 Updated by bertagaz 2017-04-28 12:51:47

  • Assignee changed from bertagaz to anonym
  • QA Check changed from Dev Needed to Ready for QA

bertagaz wrote:
> While testing, I realized I had made a mistake in the pinning for live-build and syslinux-utils. I’ll fix that, and then it should be ready for review.

Fixed in commit:971271137d7a1ee9e2642e72d15237ff45e94439; it works, as can be seen in https://jenkins.tails.boum.org/job/build_Tails_ISO_vagrant_wip-11972-use-vagrant-in-jenkins/345/console at 00:09:12.287 (elapsed time). I also removed the pinning for syslinux-utils, which is no longer necessary since we don’t ship it anymore in our builder-jessie suite.

Assigning to anonym for a quick review of the last commits about the builder-jessie snapshots. Then I think we’ll be done with this ticket.

I’ll go on with the rest on Feature #12409.

#39 Updated by bertagaz 2017-05-03 12:12:54

  • related to Feature #12503: Use rake to start the test suite in Jenkins added

#40 Updated by bertagaz 2017-05-03 13:12:35

  • related to Feature #12505: Switch isobuilders to vagrant-libvirt in Puppet added

#41 Updated by bertagaz 2017-05-03 13:13:22

Added Feature #12505 to track the Puppet part of the deployment.

#42 Updated by anonym 2017-05-04 10:42:34

  • Assignee changed from anonym to bertagaz
  • QA Check deleted (Ready for QA)

bertagaz wrote:
> bertagaz wrote:
> > While testing, I realized I had made a mistake in the pinning for live-build and syslinux-utils. I’ll fix that, and then it should be ready for review.
>
> Fixed in commit:971271137d7a1ee9e2642e72d15237ff45e94439; it works, as can be seen in https://jenkins.tails.boum.org/job/build_Tails_ISO_vagrant_wip-11972-use-vagrant-in-jenkins/345/console at 00:09:12.287 (elapsed time). I also removed the pinning for syslinux-utils, which is no longer necessary since we don’t ship it anymore in our builder-jessie suite.
>
> Assigning to anonym for a quick review of the last commits about the builder-jessie snapshots. Then I think we’ll be done with this ticket.

Looks good! (I noticed a somewhat related issue, but I’ll deal with that on Feature #12409).

#43 Updated by bertagaz 2017-05-07 11:37:31

anonym wrote:
> Looks good! (I noticed a somewhat related issue, but I’ll deal with that on Feature #12409).

Great! As said on Bug #11006, I’m re-installing the isobuilders and will merge the branch once done.

#44 Updated by bertagaz 2017-05-09 13:57:57

  • Status changed from In Progress to Fix committed

Applied in changeset commit:79b7f9ca5f09584bcaa4d948bff56ca2d9ffa30a.

#45 Updated by bertagaz 2017-05-10 10:06:31

  • Assignee deleted (bertagaz)

#46 Updated by intrigeri 2017-05-10 11:38:51

Any reason this is “Fix committed” and not “Resolved”? It seems to me that this was deployed to production already.

#47 Updated by bertagaz 2017-05-10 12:19:19

  • Status changed from Fix committed to Resolved

intrigeri wrote:
> Any reason this is “Fix committed” and not “Resolved”? It seems to me that this was deployed to production already.

Absolutely not; I just followed the usual workflow out of habit and forgot that we have different conventions for infra code. Fixed!

#48 Updated by intrigeri 2017-06-30 19:13:54

  • related to deleted (Feature #12505: Switch isobuilders to vagrant-libvirt in Puppet)