Feature #9597

Set up a way to share artifacts between jobs

Added by bertagaz 2015-06-17 03:08:46. Updated 2015-10-14 04:36:27.

Status:
Resolved
Priority:
Elevated
Assignee:
Category:
Continuous Integration
Target version:
Start date:
2015-06-17
Due date:
2015-10-15
% Done:

100%

Feature Branch:
puppet-tails:feature/9597-jenkins-share-artifacts-between-jobs
Type of work:
Code
Blueprint:

Starter:
0
Affected tool:
Deliverable for:
267

Description

When we deploy our testers on Jenkins, we’ll need a way for them to retrieve the artifacts of their corresponding upstream build_Tails_ISO job over NFS or HTTP.

We could probably write code that guesses which artifacts to use for the test, based on the branch name and the commit ID that are part of the artifact names. But that sounds like not much fun, and like an open door to bugs.

OTOH, Jenkins is supposed to be able to store this build <-> artifacts information, as any CI server should. Using that option sounds more relevant, for many reasons.

The ArtifactDeployer plugin seems like a good candidate, but it needs testing, and we would probably also have to modify the vagrant/provision/assets/build-tails script.


Subtasks


History

#1 Updated by bertagaz 2015-06-17 03:09:13

#2 Updated by intrigeri 2015-06-18 03:04:49

  • blocks #8668 added

#3 Updated by intrigeri 2015-06-18 03:09:08

> OTOH, Jenkins is supposed to be able to store this build <-> artifacts information, as any CI server should. Using that option sounds more relevant, for many reasons.

Agreed.

We’ll see once we’ve made progress on Feature #8667, but if we go for trigger relationships encoded into Jenkins between build and test jobs, then I suspect that Jenkins might give us nice ways to convey artifacts from the one to the other.

Many other people do that anyway, so let’s try to not reinvent the wheel, e.g.:

  • Tor’s Jenkins config might be a good source of inspiration too: https://gitweb.torproject.org/project/jenkins/jobs.git/
  • Same for jenkins-debian-glue, that uses separate jobs for building the source package and the binary ones.

#4 Updated by bertagaz 2015-06-18 05:45:04

intrigeri wrote:
> > OTOH, Jenkins is supposed to be able to store this build <-> artifacts information, as any CI server should. Using that option sounds more relevant, for many reasons.
>
> Agreed.
>
> We’ll see once we’ve made progress on Feature #8667, but if we go for trigger relationships encoded into Jenkins between build and test jobs, then I suspect that Jenkins might give us nice ways to convey artifacts from the one to the other.

Yes, but to do so, it has to know where the artifacts are located, which isn’t currently the case, as they are moved to their final NFS location by our build script, not by some Jenkins mechanism. So we need a way to register them for their corresponding build in the Jenkins database, or to let Jenkins move them by itself so that it knows where they are.

The ArtifactDeployer plugin seems like a good candidate but needs testing to see if it does the job. There is also a copy-to-master option in JJB that uses the copy-to-slave plugin. I’ll test the first one, as it doesn’t require modifying anything in our setup.

> Many other people do that anyway, so let’s try to not reinvent the wheel, e.g.:
>
> * Tor’s Jenkins config might be a good source of inspiration too: https://gitweb.torproject.org/project/jenkins/jobs.git/
> * Same for jenkins-debian-glue, that uses separate jobs for building the source package and the binary ones.

Most of the other installs I found use Jenkins’ plain built-in archive step. It doesn’t seem to satisfy our needs. Will continue to dig.

#5 Updated by bertagaz 2015-06-22 03:56:22

  • Assignee changed from bertagaz to intrigeri
  • QA Check changed from Dev Needed to Info Needed

So I’ve thought about this further, and I’m getting more and more convinced that we might not want to rely on NFS to share artifacts between jobs: Jenkins has its own mechanism for that, and using an external one like NFS would add a lot of complexity to our setup.

The main benefit of NFS for us was the ability to serve artifacts over HTTP on our nightly.t.b.o website. We use nightly.t.b.o as a means to publish the artifacts publicly because our Jenkins instance is private at the moment. But we are also considering making our Jenkins publicly available in the (hopefully near) future.

With our current setup, builds occur in a temporary directory, and our build script moves the artifacts to the NFS shares.

But to have Jenkins register the corresponding artifacts for a given build, we first have to move the artifacts back into the job workspace, as that is where Jenkins assumes they are, and then use a Jenkins build step to move them to the NFS shares, probably using the ArtifactDeployer plugin. This is one solution that, as stated previously, would need some testing to ensure it behaves as we think.

But there might be another simpler option:

The usual setup for artifacts in Jenkins is to use the plain archive publisher. The artifacts in this situation are stored on the Jenkins master, in /var/lib/jenkins/jobs/${JOB_NAME}/builds/..., registered in Jenkins, and displayed on the build summary page. People can then download them from the Jenkins web interface. Once our Jenkins is public, that would be an alternative to our nightly.t.b.o website.

So it occurs to me we could set it up the other way around: what if the NFS server were set up on our Jenkins master to share the /var/lib/jenkins/jobs/ directory, so that our nightly webserver could serve it?

Then, as with the previous option, we would just have to move the artifacts to the job workspace at the end of the build, but in this case we’d only use the plain archive publisher to store them on the Jenkins master’s NFS filesystem through Jenkins’ internal artifact archiving. We wouldn’t have to share this NFS export with the isobuilders or isotesters anymore, and once our Jenkins is made public, we wouldn’t have to use NFS at all. It would also be a bit less hackish and closer to Jenkins’ usual workflow.
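
For illustration, a minimal sketch of that reversed setup; the export options and the /srv/nightly mount point are assumptions, not our actual configuration:

    # on jenkins.lizard (NFS server), in /etc/exports:
    /var/lib/jenkins/jobs  www.lizard(ro,no_subtree_check)

    # on www.lizard (NFS client), mount it read-only for the nightly webserver:
    mount -t nfs -o ro jenkins.lizard:/var/lib/jenkins/jobs /srv/nightly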

In this case, we also wouldn’t have to deal with Bug #8912, as the job directories on the master are deleted when a job is removed.

But we’d have to modify our clean_old_jenkins_artifacts.rb script because the directory structure in /var/lib/jenkins/jobs is a bit different.

Does it make sense? Do you think that’s an interesting switch, or do you see other drawbacks to it? Do you see anything in /var/lib/jenkins/jobs/*/builds/ we might not want to share?

#6 Updated by bertagaz 2015-06-22 06:29:39

I’ve set up a new build_Tails_ISO_feature-9597-jenkins-share-artifacts-between-jobs job configured with the archive publisher, and a corresponding branch on tails.git where the build script has been modified accordingly.

Artifacts are getting stored in /var/lib/jenkins/build_Tails_ISO_feature-9597-jenkins-share-artifacts-between-jobs/$date/archive/

That’s just a way to test it if you want to have a look. The artifact copying ends with an error, because we don’t have enough disk space for that at the moment, but some of them are copied, so you can get the whole picture. Don’t forget to remove these artifacts if you try a build, otherwise our Jenkins master’s root filesystem will fill up.

#7 Updated by intrigeri 2015-06-22 10:35:32

  • Assignee changed from intrigeri to bertagaz

> Does it make sense?

To me it does!

> Do you think that’s an interesting switch,

Yes.

> or do you see other drawbacks to it?

We should check if Jenkins keeps an internal index of artifacts it knows about: interfering with its storage with clean_old_jenkins_artifacts.rb might break things.

> Do you see anything in /var/lib/jenkins/jobs/*/builds/ we might not want to share?

We can choose exactly what we copy in there with ArtifactDeployer, can’t we? If yes, then I don’t understand what the question is about. Note that I didn’t look at what’s currently in that directory — if I should, just tell me.

#8 Updated by bertagaz 2015-06-22 12:49:36

  • Assignee changed from bertagaz to intrigeri

intrigeri wrote:
>
> We should check if Jenkins keeps an internal index of artifacts it knows about: interfering with its storage with clean_old_jenkins_artifacts.rb might break things.

Right. I think it does, but I’ll test that to be sure.

> We can choose exactly what we copy in there with ArtifactDeployer, can’t we?

Both the ArtifactDeployer plugin and the native Jenkins archive step let us use a regexp to choose what to copy.

> If yes, then I don’t understand what the question is about. Note that I didn’t look at what’s currently in that directory — if I should, just tell me.

There’s a misunderstanding: the plan I propose is to not use the ArtifactDeployer plugin. Let me try to clarify my previous leeeengthy note:

Scenario 1: Keep NFS (original)

  • artifacts are copied back in $WORKSPACE by the build script
  • an ArtifactDeployer step on the slave copies them to the www.lizard NFS share and (hopefully) stores the build <-> NFS file relationship in the Jenkins database, so that other jobs and people can access them. If not, and depending on how it behaves with external storage, we might have to add more home-made scripts to work around that.

Scenario 2: Get rid of NFS

  • artifacts are copied back in $WORKSPACE by the build script
  • the native Jenkins archive step on the slave copies them into the Jenkins master’s /var/lib/jenkins/jobs/${JOB_NAME}/builds/${build_ID}/, which is an NFS share, so that nightly.t.b.o can serve them (until our Jenkins instance goes public).

Advantage of scenario 1: we use our already-written artifact rotation script, and there’s no need to modify our infra setup.
Downside of scenario 1: might bring complexity in the future, as it’s not the way most projects share their artifacts.

Advantages of my proposed scenario 2: use of Jenkins’ native way to store and share artifacts. No NFS in the end once our Jenkins instance is public, as artifacts will be available through its web interface, displayed on their build page. We also use the plain native Jenkins artifact archiving feature, which sounds more reliable than adding one more plugin and crossing fingers about its compatibility with others.
Downside of scenario 2: we have to adapt the clean_old_jenkins_artifacts.rb script and deploy our NFS/nightly.t.b.o setup the other way around (NFS server on jenkins.lizard).

So please have a quick look at what’s under /var/lib/jenkins/jobs/ on the Jenkins master, which would be an NFS share served by nightly.t.b.o until our Jenkins goes public. There’s only the job and build configuration, SCM/build logs, and other files that don’t seem like a problem to publish.

Sorry for having been so verbose in the previous note; I may not have presented the rationale very clearly. I wish we had discussed this IRL. :)

#9 Updated by bertagaz 2015-06-22 13:36:09

bertagaz wrote:
> intrigeri wrote:
> >
> > We should check if Jenkins keeps an internal index of artifacts it knows about: interfering with its storage with clean_old_jenkins_artifacts.rb might break things.
>
> Right. I think it does, but I’ll test that to be sure.

Tested on build number 5 of the build_Tails_ISO_feature-9597-jenkins-share-artifacts-between-jobs job.

Even though the build failed because of the expected disk space issue when moving the artifacts to the master, some artifacts still showed up on the build summary page.

I deleted them in the job folder on the master, and they just disappeared from the build summary page. So cleaning the artifacts from outside seems to work.

#10 Updated by bertagaz 2015-06-25 08:30:25

Tested the Artifact-deployer plugin on build #7 of the build_Tails_ISO_feature-9597-jenkins-share-artifacts-between-jobs job.

Artifacts end up in the correct NFS subdirectory as shown by the build log.

It adds a link in the build summary to a webpage listing and linking to the deployed artifacts. There’s no indication of these artifacts in the /api/{json,xml}/ build URL though. Nor can they be downloaded from that webpage.

So it does integrate pretty well with our current nightly setup in terms of moving the artifacts to the NFS directory.

Still, it doesn’t help to share artifacts between jobs, as Jenkins cannot share the files.

The NFS artifacts path, in the scenario where we use Artifact-deployer/NFS to share the artifacts, would probably have to be computed from a standard way of storing them (e.g. /srv/www/${JOB_NAME}/${BUILD_ID}/ rather than /srv/www/${JOB_NAME}/).

So do we prefer to use Jenkins’ internal way of sharing artifacts (copyartifacts plugin, temporary NFS from jenkins.li:/var/lib/jenkins/jobs/ to www.li:/srv/www/ as long as nightly.t.b.o is needed), or the “Artifact-deployer/NFS everywhere” (a bunch of homebrewed logic and code) way? I’m clearly tempted by the first. :)

#11 Updated by intrigeri 2015-06-29 01:57:51

> So please have a quick look at what’s under /var/lib/jenkins/jobs/ on the Jenkins master, which would be an NFS share served by nightly.t.b.o until our Jenkins goes public. There’s only the job and build configuration, SCM/build logs, and other files that don’t seem like a problem to publish.

I think the (recursive) directory listing is OK to publish. Regarding file contents, we could have a whitelist of locations whose content can be served publicly, so we don’t have to care about whether every file’s content is OK to publish.

#12 Updated by intrigeri 2015-06-29 01:58:26

> I deleted them in the job folder on the master, and they just disappeared from the build summary page. So cleaning the artifacts from outside seems to work.

Yay :)

#13 Updated by intrigeri 2015-06-29 02:01:23

> Tested the Artifact-deployer plugin […]

> So it does integrate pretty well with our current nightly setup in terms of moving the artifacts to the NFS directory. Still, it doesn’t help to share artifacts between jobs, as Jenkins does not have access to the files, and the information we need (the NFS artifacts path) would have to be parsed from the build webpage.

OK, so if I got it right, that’s not a solution to the problem at hand: all in all, ArtifactDeployer seems to be cp + a bit of non-machine-readable fancy web bonus information, which is not what we need. So I guess you’ll go on investigating the other option (Jenkins’ own artifacts archiving support + reverse the NFS setup until we can make our Jenkins public).

#14 Updated by bertagaz 2015-07-06 03:05:15

  • Assignee changed from intrigeri to bertagaz
  • QA Check changed from Info Needed to Dev Needed

intrigeri wrote:
> OK, so if I got it right, that’s not a solution to the problem at hand: all in all, ArtifactDeployer seems to be cp + a bit of non-machine-readable fancy web bonus information, which is not what we need. So I guess you’ll go on investigating the other option (Jenkins’ own artifacts archiving support + reverse the NFS setup until we can make our Jenkins public).

Yes, I’ll try this option, but only once the automated builds are deployed.

#15 Updated by intrigeri 2015-07-19 07:33:52

  • Status changed from Confirmed to In Progress

#16 Updated by intrigeri 2015-08-02 08:01:49

  • Due date set to 2015-10-15

#17 Updated by intrigeri 2015-08-02 12:33:47

I guess that’s the next step to unblock the work on the automated tests in Jenkins, right? If you need info wrt. my availability to review’n’merge stuff, please ask early.

#18 Updated by bertagaz 2015-08-05 05:33:18

  • Target version changed from Tails_1.5 to Tails_1.6

Postponing

#19 Updated by intrigeri 2015-08-26 06:05:35

  • Deliverable for set to 267

#20 Updated by bertagaz 2015-08-27 10:27:56

Ok, this will be my next big move regarding autotests. It’s the main feature that is currently blocking further work on them.

So the steps to do that would be:

  1. Shut down the Jenkins slave and master services
  2. Shut down nightly.tails.boum.org
  3. Remove the configuration of the previous (crappy) NFS deployment
  4. Detach /dev/lizard/www-data from www.lizard
  5. Rename it to something more appropriate (/dev/lizard/jenkins-data); see the sketch after this list
  6. Attach /dev/lizard/jenkins-data to jenkins.lizard
  7. Move /var/lib/jenkins to /dev/lizard/jenkins-data
  8. Move the old autobuilt ISOs into their corresponding build artifact directories
  9. Adapt the clean_old_jenkins_artifacts.rb script to be run as a cronjob on the master
  10. Adapt the current jobs configuration to use the archive step to copy artifacts to the Jenkins master
  11. Adapt the generate_build_tails_iso_jobs script accordingly
  12. Adapt the tails.git:vagrant/provision/assets/build-tails script
  13. Set up an NFS server on jenkins.lizard
  14. Set up an NFS client on www.lizard for nightly.tails.boum.org
  15. Start the stopped services

How fun!
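
As a rough sketch of steps 4-7 (the in-guest device name and the exact attach/detach commands are assumptions; they depend on how the volume is exposed to the VMs):

    # rename the logical volume in the "lizard" volume group
    lvrename lizard www-data jenkins-data
    # then, once /dev/lizard/jenkins-data is attached to jenkins.lizard
    # (hypothetical in-guest device name /dev/vdb), move the data over:
    mount /dev/vdb /mnt
    rsync -a /var/lib/jenkins/ /mnt/
    umount /mnt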

I’ll try to write most of the code in branches ASAP for a first review. That would mostly be the “Adapt” steps, I think, plus the new NFS setup. Removing the old one will probably require several changes, so it will be a bit of a burden to do it in branches. Then, if you’re OK with them, I’ll deploy it.

#21 Updated by bertagaz 2015-08-27 10:28:34

  • Assignee changed from bertagaz to intrigeri
  • QA Check changed from Dev Needed to Info Needed

#22 Updated by bertagaz 2015-08-28 06:12:14

bertagaz wrote:
>
> 12. Adapt the tails.git:vagrant/provision/assets/build-tails script

This was done a while ago in 2144282 and fcf4e65 in tails.git:feature/9597-jenkins-share-artifacts-between-jobs.

It was tested in Jenkins together with a corresponding job for that branch, configured with the archive publisher (see commit 20e5eab in the jenkins-jobs.git repo). The build went fine and the artifacts were moved to the master as planned.

I’m not building with vagrant, so I didn’t test this part, but given how minor the change is for this kind of build, I don’t see why it would break it.

#23 Updated by bertagaz 2015-08-29 08:34:43

  • Feature Branch set to puppet-tails:feature/9597-jenkins-share-artifacts-between-jobs

Both

9. Adapt the clean_old_jenkins_artifacts.rb script to be run as a cronjob on the master

and

11. Adapt the generate_build_tails_iso_jobs script accordingly

have been done in the feature branch, starting on commit 5e82f21.

#24 Updated by bertagaz 2015-08-29 08:34:59

  • % Done changed from 0 to 30

#25 Updated by intrigeri 2015-08-30 10:46:02

> Move /var/lib/jenkins to /dev/lizard/jenkins-data

I’m unsure about this one: there might be advantages to putting only the data we want to share (/var/lib/jenkins/jobs/) on that filesystem.

> Move the old autobuilt ISOs into their corresponding build artifact directories

If it’s not trivial, feel free to skip this one.

> Adapt the current jobs configuration to use the archive step to copy artifacts to the Jenkins master
> Adapt the generate_build_tails_iso_jobs script accordingly

Rather in the opposite order, I suspect.

This plan might lack a step to temporarily disable some cronjobs, and then re-enable them.

Otherwise, totally makes sense!

#26 Updated by intrigeri 2015-08-30 10:52:53

  • Assignee changed from intrigeri to bertagaz
  • QA Check changed from Info Needed to Dev Needed

>> 12. Adapt the tails.git:vagrant/provision/assets/build-tails script

> This was done a while ago in 2144282 and fcf4e65 in tails.git:feature/9597-jenkins-share-artifacts-between-jobs.

Looks good, just two details:

I don’t understand why you’re adding test -d "$ARTIFACTS_DIR" || install -m 0755 -d "$ARTIFACTS_DIR" to the Vagrant code path, though: we’re creating the directory later already in this script, and I don’t see how Vagrant builds would be impacted by this branch anyway.

Regarding commit:fcf4e65, do you mean that Jenkins creates and manages all those symlinks by itself, in a way that allows us to provide a stable URL to, say, the latest ISO built from feature/jessie? What would that URL look like?

> I’m not building with vagrant, so I didn’t test this part, but given how minor the change is for this kind of build, I don’t see why it would break it.

Agreed, especially once you’ve reverted the change to the Vagrant code path mentioned above ;)

#27 Updated by intrigeri 2015-08-30 11:26:28

Hi again!

Great job :) I’ve got lots of comments, but they should all be trivial to address, and in most cases I’m suggesting the solution myself.

> 9. Adapt the clean_old_jenkins_artifacts.rb script to be run as a cronjob on the master

Please don’t hardcode /var/lib/jenkins in clean_old_jenkins_artifacts_wrapper, instead pass it as a parameter in the cronjob, to make development and testing easier. (We’ve had the same discussion about a similar situation a few months ago, IIRC — you won’t be surprised, I didn’t change my mind :)

Dir.glob("#{options[:topdir]}/builds/**/archive/*.iso"): do we really need to match directories recursively with **? IOW, don’t we know in advance exactly how many levels of sub-directories we have to go through? If we know that, better replace this with as many */ as needed, I think — this might avoid us nasty surprises some day.

I don’t know what the directory structure under $jobdir/builds/ is (the info you provided earlier on this ticket doesn’t match the code I am reviewing), so just to be sure: the updated clean_old_jenkins_artifacts won’t leave empty directories (e.g. archive, or perhaps even archive/.. etc.) behind itself, will it?

> 11. Adapt the generate_build_tails_iso_jobs script accordingly

artifacts: 'tails-*' is relative to the workspace, that is to the Git checkout we’re building from, right? If yes, I see two problems:

  1. Anything we add to Git some day, that matches this glob, will be archived by Jenkins. That’s a regression compared to the current state of things.
  2. If our build system some day produces artifacts worth archiving, that are called differently, then we’ll have to adjust both build-tails and the jobs config.

I suggest modifying tails.git:build-tails to move artifacts to ARTIFACTS_DIR="$WORKSPACE/build-artifacts", and modifying generate_build_tails_iso_jobs accordingly to use artifacts: 'build-artifacts/tails-*'. As a bonus, add /build-artifacts/ to .gitignore in tails.git, so that if someone tries someday to use the same name, they have a chance to notice that something is wrong. These presumably trivial (?) changes would solve both of my concerns.
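
For what it’s worth, a sketch of what that could look like in build-tails (hypothetical excerpt; the tails-* glob is the one discussed above):

    # gather everything worth archiving in a dedicated subdirectory
    ARTIFACTS_DIR="$WORKSPACE/build-artifacts"
    install -m 0755 -d "$ARTIFACTS_DIR"
    mv tails-* "$ARTIFACTS_DIR/"

… plus the /build-artifacts/ line in tails.git’s .gitignore and artifacts: 'build-artifacts/tails-*' in the jobs configuration, as described above.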

What does the fingerprint configuration setting do in the archive section? I guess it has some impact on performance/resources, so I wonder what the advantages are.

Usually, when I add things like “XXX: Remove me once applied”, I drop all resource parameters except the ensure => absent for clarity’s sake. Just a suggestion, of course these things are not meant to live long :)

#28 Updated by bertagaz 2015-08-31 09:37:05

intrigeri wrote:
>
> I don’t understand why you’re adding test -d "$ARTIFACTS_DIR" || install -m 0755 -d "$ARTIFACTS_DIR" to the Vagrant code path, though: we’re creating the directory later already in this script, and I don’t see how Vagrant builds would be impacted by this branch anyway.

Well, the install -m 0755 -d "$ARTIFACTS_DIR" was in the part of the script shared by Jenkins and Vagrant. It doesn’t make sense in Jenkins (well ok, as long as $ARTIFACT_DIR == $WORKSPACE), so I’ve moved that part in the Vagrant specific bits.

> Regarding commit:fcf4e65, do you mean that Jenkins creates and manages all those symlinks by itself, in a way that allows us to provide a stable URL to, say, the latest ISO built from feature/jessie? What would that URL look like?

Yes it does, Jenkins maintains symlinks that would render an URL like:

http://nightly.tails.boum.org/build_Tails_ISO_devel/builds/lastSuccessful or
http://nightly.tails.boum.org/build_Tails_ISO_devel/builds/lastFailed

as well as

http://nightly.tails.boum.org/build_Tails_ISO_devel/lastSuccessful
http://nightly.tails.boum.org/build_Tails_ISO_devel/lastFailed

Have a quick look in /var/lib/jenkins/jobs/$JOB to get an idea.

#29 Updated by bertagaz 2015-08-31 10:03:15

intrigeri wrote:
>
> Great job :) I’ve got lots of comments, but they should all be trivial to address, and in most cases I’m suggesting the solution myself.

Thanks for the kind words!

> > 9. Adapt the clean_old_jenkins_artifacts.rb script to be run as a cronjob on the master
>
> Please don’t hardcode /var/lib/jenkins in clean_old_jenkins_artifacts_wrapper, instead pass it as a parameter in the cronjob, to make development and testing easier. (We’ve had the same discussion about a similar situation a few months ago, IIRC — you won’t be surprised, I didn’t change my mind :)

Ooch, right, commit 4d9482d.

> Dir.glob("#{options[:topdir]}/builds/**/archive/*.iso"): do we really need to match directories recursively with **? IOW, don’t we know in advance exactly how many levels of sub-directories we have to go through? If we know that, better replace this with as many */ as needed, I think — this might avoid us nasty surprises some day.

Well, the /var/lib/jenkins/$JOB/builds/ directory is full of symlinks, in the form of BUILDID <-> BUILDDATE. The ** part of the regexp does not traverse them and thus doesn’t return the same file twice. I’ve tested that.

> I don’t know what the directory structure under $jobdir/builds/ is (the info you provided earlier on this ticket doesn’t match the code I am reviewing), so just to be sure: the updated clean_old_jenkins_artifacts won’t leave empty directories (e.g. archive, or perhaps even archive/.. etc.) behind itself, will it?

The info is in jenkins.lizard:/var/lib/jenkins. The archive directory will remain, with some tiny files in there, but it will be removed by Jenkins with its own artifacts cleanup method, which is a bit too lame for us, sadly.

> > 11. Adapt the generate_build_tails_iso_jobs script accordingly
>
> artifacts: 'tails-*' is relative to the workspace, that is to the Git checkout we’re building from, right? If yes, I see two problems:
>
> # Anything we add to Git some day, that matches this glob, will be archived by Jenkins. That’s a regression compared to the current state of things.
> # If our build system some day produces artifacts worth archiving, that are called differently, then we’ll have to adjust both build-tails and the jobs config.
>
> I suggest modifying tails.git:build-tails to move artifacts to ARTIFACTS_DIR="$WORKSPACE/build-artifacts", and modifying generate_build_tails_iso_jobs accordingly to use artifacts: 'build-artifacts/tails-*'. As a bonus, add /build-artifacts/ to .gitignore in tails.git, so that if someone tries someday to use the same name, they have a chance to notice that something is wrong. These presumably trivial (?) changes would solve both of my concerns.

Hmmm well, that’s the way it’s done at the moment. I’ll do it, but please consider adding this kind of enhancement to my plate in other tickets not related to my deliverables. What I wrote doesn’t introduce regressions.

> What does the fingerprint configuration setting do in the archive section? I guess it has some impact on performance/resources, so I wonder what the advantages are.

None, apart from the fact that they get displayed on the Jenkins build webpage. Agreed, it would take unnecessary CPU time, so I removed it. Let’s go green with commit 9f6b937! :D

> Usually, when I add things like “XXX: Remove me once applied”, I drop all resource parameters except the ensure => absent for clarity’s sake. Just a suggestion, of course these things are not meant to live long :)

Ok, will try to remember that.

#30 Updated by bertagaz 2015-08-31 10:03:39

  • % Done changed from 30 to 50

#31 Updated by bertagaz 2015-09-01 02:16:10

  • Assignee changed from bertagaz to intrigeri
  • QA Check changed from Dev Needed to Info Needed

bertagaz wrote:
> intrigeri wrote:
> >
> > I suggest modifying tails.git:build-tails to move artifacts to ARTIFACTS_DIR="$WORKSPACE/build-artifacts", and modifying generate_build_tails_iso_jobs accordingly to use artifacts: 'build-artifacts/tails-*'. As a bonus, add /build-artifacts/ to .gitignore in tails.git, so that if someone tries someday to use the same name, they have a chance to notice that something is wrong. These presumably trivial (?) changes would solve both of my concerns.
>
> Hmmm well, that’s the way it’s done at the moment. I’ll do it, but please consider adding this kind of enhancement to my plate in other tickets not related to my deliverables. What I wrote doesn’t introduce regressions.

Done in puppet-tails.git:1f74b6d and tails.git:46ca4b6 in their respective feature/9597-jenkins-share-artifacts-between-jobs branch.

I think everything has been reviewed. Shall I go on and deploy it?

#32 Updated by intrigeri 2015-09-01 02:19:13

> intrigeri wrote:
>>
>> I don’t understand why you’re adding test -d "$ARTIFACTS_DIR" || install -m 0755 -d "$ARTIFACTS_DIR" to the Vagrant code path, though: we’re creating the directory later already in this script, and I don’t see how Vagrant builds would be impacted by this branch anyway.

> Well, the install -m 0755 -d "$ARTIFACTS_DIR" was in the part of the script shared by Jenkins and Vagrant. It doesn’t make sense in Jenkins (well ok, as long as $ARTIFACT_DIR == $WORKSPACE), so I’ve moved that part in the Vagrant specific bits.

OK, got it. I think I’d prefer not to implicitly rely on $ARTIFACT_DIR == $WORKSPACE, and leaving that install in the shared section does no harm.

>> Regarding commit:fcf4e65, do you mean that Jenkins creates and manages all those symlinks by itself, in a way that allows us to provide a stable URL to, say, the latest ISO built from feature/jessie? What would that URL look like?

> Yes it does, Jenkins maintains symlinks that would render an URL like:

> http://nightly.tails.boum.org/build_Tails_ISO_devel/builds/lastSuccessful or
> http://nightly.tails.boum.org/build_Tails_ISO_devel/builds/lastFailed

> as well as

> http://nightly.tails.boum.org/build_Tails_ISO_devel/lastSuccessful
> http://nightly.tails.boum.org/build_Tails_ISO_devel/lastFailed

You seem to suggest that these URLs would point to the latest ISO (since that’s what I asked for). Is that what you meant?

But currently the symlinks I see point to a directory (that possibly will later contain the build artifacts). Assuming I’m guessing right with the limited info I have, how does this buy us a stable URL to the latest ISO image built from one branch, given the filename of said ISO images is always different? (Sorry if I’m a bit insistent here, I just don’t understand how this whole thing will work, and your answer makes me a bit confused.)

#33 Updated by intrigeri 2015-09-01 02:42:25

  • Assignee changed from intrigeri to bertagaz
  • QA Check changed from Info Needed to Dev Needed

> Ooch, right, commit 4d9482d.

:)

Which triggers a couple more comments:

  • missing quotes around $1
  • I’m not sure what you’re trying to do with TOPDIR=$1 || exit 1, but:
    • if we hadn’t set -u, then the assignment would always succeed, so || exit 1 would never be run; but we’re under set -u so if no parameter is passed, then execution aborts before || exit 1 anyway; so || exit 1 seems useless in any case; this is shell, not Perl :)
    • if we pass an empty parameter, or a parameter that’s not the path of an existing directory, or the path of an unreadable directory, then we have no safeguard;
    • so, I suggest writing something like:
    TOPDIR="$1"
    [ -n "$TOPDIR" ] || exit 1
    [ -d "$TOPDIR" ] || exit 2
    [ -r "$TOPDIR" ] || exit 4

>> Dir.glob("#{options[:topdir]}/builds/**/archive/*.iso"): do we really need to match directories recursively with **? IOW, don’t we know in advance exactly how many levels of sub-directories we have to go through? If we know that, better replace this with as many */ as needed, I think — this might avoid us nasty surprises some day.

> Well, the /var/lib/jenkins/$JOB/builds/ directory is full of symlinks, in the form of BUILDID <-> BUILDDATE. The ** part of the regexp does not traverse them and thus doesn’t return the same file twice. I’ve tested that.

OK, thanks (perhaps a comment would be useful, because next time I see that in a year I’ll want to “fix” it).

It’s still a bit unclear to me why we need **, though: are you implicitly suggesting that * would return the same file multiple times in this situation?

>> I don’t know what the directory structure under $jobdir/builds/ is (the info you provided earlier on this ticket doesn’t match the code I am reviewing), so just to be sure: the updated clean_old_jenkins_artifacts won’t leave empty directories (e.g. archive, or perhaps even archive/.. etc.) behind itself, will it?

> The info is in jenkins.lizard:/var/lib/jenkins. The archive directory will remain, with some tiny files in there, but it will be removed by Jenkins with its own artifacts cleanup method, which is a bit too lame for us, sadly.

OK, thanks, sounds good enough to me. So there must be a plan to turn on Jenkins’ own artifacts cleanup method. I think it should be made clear on Feature #10118. I’m curious how we’ll do that in a way that doesn’t interfere with our own artifacts cleanup (I guess that we’ll need to configure Jenkins’ own cleanup to only trigger for stuff that’s older than anything we want to keep, which sounds acceptable); if there’s anything more to say about it, let’s discuss it on Feature #10118.

>> > 11. Adapt the generate_build_tails_iso_jobs script accordingly
>>
>> artifacts: 'tails-*' is relative to the workspace, that is to the Git checkout we’re building from, right? If yes, I see two problems:
>>
>> # Anything we add to Git some day, that matches this glob, will be archived by Jenkins. That’s a regression compared to the current state of things.
>> # If our build system some day produces artifacts worth archiving, that are called differently, then we’ll have to adjust both build-tails and the jobs config.
>>
>> I suggest modifying tails.git:build-tails to move artifacts to ARTIFACTS_DIR="$WORKSPACE/build-artifacts", and modifying generate_build_tails_iso_jobs accordingly to use artifacts: 'build-artifacts/tails-*'. As a bonus, add /build-artifacts/ to .gitignore in tails.git, so that if someone tries someday to use the same name, they have a chance to notice that something is wrong. These presumably trivial (?) changes would solve both of my concerns.

> Hmmm well, that’s the way it’s done at the moment.

Indeed, I missed that. Sorry!

> I’ll do it, but please consider adding this kind of enhancement to my plate in other tickets not related to my deliverables.

Right, sorry again. I’ll try to be more careful in the future.

#34 Updated by intrigeri 2015-09-01 03:47:19

> Done in puppet-tails.git:1f74b6d and tails.git:46ca4b6 in their respective feature/9597-jenkins-share-artifacts-between-jobs branch.

Yay, looks good! The test is not needed in test -d "$ARTIFACTS_DIR" || install -m 0755 -d "$ARTIFACTS_DIR", so please drop it.

> I think everything has been reviewed. Shall I go on and deploy it?

See the other comments and questions I’ve sent today.

#35 Updated by bertagaz 2015-09-01 04:09:26

intrigeri wrote:
> > Ooch, right, commit 4d9482d.
>
> Which triggers a couple more comments:
>
> * missing quotes around $1
> * I’m not sure what you’re trying to do with TOPDIR=$1 || exit 1, but:
> if we hadn’t set -u, then the assignment would always succeed, so || exit 1 would never be run; but we’re under set -u so if no parameter is passed, then execution aborts before || exit 1 anyway; so || exit 1 seems useless in any case; this is shell, not Perl :)
> if we pass an empty parameter, or a parameter that’s not the path of an existing directory, or the path of an unreadable directory, then we have no safeguard;
> so, I suggest writing something like:

commit b1c6c49.

> >> Dir.glob("#{options[:topdir]}/builds/**/archive/*.iso"): do we really need to match directories recursively with **? IOW, don’t we know in advance exactly how many levels of sub-directories we have to go through? If we know that, better replace this with as many */ as needed, I think — this might avoid us nasty surprises some day.
>
> > Well, the /var/lib/jenkins/$JOB/builds/ directory is full of symlinks, in the form of BUILDID <-> BUILDDATE. The ** part of the regexp does not traverse them and thus doesn’t return the same file twice. I’ve tested that.
>
> OK, thanks (perhaps a comment would be useful, because next time I see that in a year I’ll want to “fix” it).

Added one in commit 266db6d.

> It’s still a bit unclear to me why we need **, though: are you implicitly suggesting that * would return the same file multiple times in this situation?

Let me try to explain:

For example, for build number 1, which finished on 2015/07/13 02:33:57, you get the artifacts stored in the archive subdirectory of:

/var/lib/jenkins/jobs/$JOB/builds/2015-07-13_02-33-57/

But it can also be accessed through a symlink to this directory in the form of:

/var/lib/jenkins/jobs/$JOB/builds/1/

And, if the build succeeded, also through another symlink:

/var/lib/jenkins/jobs/$JOB/builds/lastSucceeded/

So with a simple regexp, we would end up with the same artifact several times. The ** part of the regexp ensures we don’t traverse the /1/ or /lastSucceeded/ symlinks to /2015-07-13_02-33-57/, whereas a simple * would traverse them.
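
To make this concrete, a small shell demonstration of the duplication, using a made-up build:

    # one real build directory, plus the symlinks Jenkins maintains to it
    mkdir -p builds/2015-07-13_02-33-57/archive
    touch builds/2015-07-13_02-33-57/archive/tails-test.iso
    ln -s 2015-07-13_02-33-57 builds/1
    ln -s 2015-07-13_02-33-57 builds/lastSucceeded
    # a single-level glob reaches the same file through every symlink:
    ls builds/*/archive/*.iso    # lists the same ISO three times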

#36 Updated by bertagaz 2015-09-01 04:23:47

intrigeri wrote:
> > Done in puppet-tails.git:1f74b6d and tails.git:46ca4b6 in their respective feature/9597-jenkins-share-artifacts-between-jobs branch.
>
> Yay, looks good! The test is not needed in test -d "$ARTIFACTS_DIR" || install -m 0755 -d "$ARTIFACTS_DIR", so please drop it.

commit 5efafc8.

> > I think everything has been reviewed. Shall I go on and deploy it?
>
> See the other comments and questions I’ve sent today.

We’re doing so much ticketing that I have a hard time following. :)

#37 Updated by bertagaz 2015-09-01 04:33:13

intrigeri wrote:
> > intrigeri wrote:
>
> >> Regarding commit:fcf4e65, do you mean that Jenkins creates and manages all those symlinks by itself, in a way that allows us to provide a stable URL to, say, the latest ISO built from feature/jessie? What would that URL look like?
>
> > Yes it does, Jenkins maintains symlinks that would render an URL like:
>
> > http://nightly.tails.boum.org/build_Tails_ISO_devel/builds/lastSuccessful or
> > http://nightly.tails.boum.org/build_Tails_ISO_devel/builds/lastFailed
>
> > as well as
>
> > http://nightly.tails.boum.org/build_Tails_ISO_devel/lastSuccessful
> > http://nightly.tails.boum.org/build_Tails_ISO_devel/lastFailed
>
> You seem to suggest that these URLs would point to the latest ISO (since that’s what I asked for). Is that what you meant?
>
> But currently the symlinks I see point to a directory (that possibly will later contain the build artifacts). Assuming I’m guessing right with the limited info I have, how does this buy us a stable URL to the latest ISO image built from one branch, given the filename of said ISO images is always different? (Sorry if I’m a bit insistent here, I just don’t understand how this whole thing will work, and your answer makes me a bit confused.)

Well ok, it doesn’t point directly to the last ISO, but to the last artifact directory. Instead of sending a link like http://nightly.tails.boum.org/$JOB/latest.iso, the link to send would be http://nightly.tails.boum.org/$JOB/lastSuccessful/archive/.

I’m not sure it changes much in the end; the URL would be stable too.

In a browser, one would get the file listing of that directory and just have to right-click and save the link to the ISO; with a command-line tool, one can just use http://nightly.tails.boum.org/$JOB/lastSuccessful/archive/*.iso

#38 Updated by intrigeri 2015-09-01 05:08:40

> commit b1c6c49.
[…]
> Added one in commit 266db6d.

Perfect.

> So with a simple regexp, we would end up with the same artifact several times. The ** part of the regexp ensures we don’t traverse the /1/ or /lastSucceeded/ symlinks to /2015-07-13_02-33-57/, whereas a simple * would traverse them.

OK, got it. It’s a bit crazy that this obscure difference is not mentioned in the Dir.glob documentation (that I had checked before, that’s why I was asking :) Thanks for the explanation.

#39 Updated by intrigeri 2015-09-01 05:09:02

> commit 5efafc8.

Good.

#40 Updated by intrigeri 2015-09-01 05:26:50

> In a browser, one would get the file listing of that directory and just have to right-click and save the link to the ISO;

I find it harder to document that (e.g. in a call for testing) than simply “download that URL”. That one is maybe not a blocker, although locating the ISO among other similar filenames may be non-trivial (see the problem we have with the short filenames displayed in nginx’ directory index: http://nightly.tails.boum.org/build_Tails_ISO_feature-jessie/ — without short filenames one has to mouse hover the links, or to check the file size, to find the ISO; not fun).

> with a command-line tool, one can just use http://nightly.tails.boum.org/$JOB/lastSuccessful/archive/*.iso

I don’t know what command-line tool supports that. At least wget and curl don’t. Note that there are already external things that rely on having stable URLs, e.g. the reproducible.debian.net cronjob that fetches the latest.iso.binpkgs for our Jessie builds. I would find it sad to introduce a regression here.

I don’t want to spend more time arguing about whether this regression is OK or not; I’d rather spend time finding a solution.

Let’s take a step back. commit:fcf4e65 justifies dropping the symlinks because they are not necessary anymore (which is the part I disagree with), but it doesn’t say that keeping the symlink generation would not work or would break anything. Would they break anything? In other words, can Jenkins’ artifacts archiving support retrieve symlinks? From there:

If it can, then we can simply:

  • revert that commit
  • adjust a bit to call the symlinks differently (the files they point to won’t always be the latest ones); something like tails.iso* should be good enough
  • add latest.iso* to the list of artifacts that shall be archived.

And then, every $JOB/*/archive/ directory will have these symlinks, and we can point both humans and external robots to http://nightly.tails.boum.org/$JOB/lastSuccessful/archive/tails.iso and friends.

If Jenkins master cannot retrieve symlinks from the slave, then it gets a little bit more complicated, and then I would understand that you’re reluctant to add this to your already pretty well stuffed plate. It would be easy enough to add a cronjob, on the master, that creates such symlinks in each $JOB/lastSuccessful/archive/ directory. This would perhaps better be done as part of the “refresh list of jobs” thing we already have. I could do it myself, I guess.
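
Something along these lines, as a rough sketch of that cronjob (the tails-*.iso pattern and the directory layout are assumptions based on the discussion above):

    # give each job a stable "tails.iso" name pointing at the latest built ISO
    for job in /var/lib/jenkins/jobs/*; do
        archive="$job/builds/lastSuccessful/archive"
        [ -d "$archive" ] || continue
        iso=$(find "$archive" -maxdepth 1 -name 'tails-*.iso' | head -n 1)
        [ -n "$iso" ] && ln -sf "$(basename "$iso")" "$archive/tails.iso"
    done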

#41 Updated by bertagaz 2015-09-02 04:55:01

  • Assignee changed from bertagaz to intrigeri
  • QA Check changed from Dev Needed to Info Needed

intrigeri wrote:
> > In a browser, one would get the file listing of that directory and just have to right-click and save the link to the ISO;
>
> I find it harder to document that (e.g. in a call for testing) than simply “download that URL”. That one is maybe not a blocker, although locating the ISO among other similar filenames may be non-trivial (see the problem we have with the short filenames displayed in nginx’ directory index: http://nightly.tails.boum.org/build_Tails_ISO_feature-jessie/ — without short filenames one has to mouse hover the links, or to check the file size, to find the ISO; not fun).

Right, that’s not as easy as the current state.

> > with a command-line tool, one can just use http://nightly.tails.boum.org/$JOB/lastSuccessful/archive/*.iso
>
> I don’t know what command-line tool supports that. At least wget and curl don’t. Note that there are already external things that rely on having stable URLs, e.g. the reproducible.debian.net cronjob that fetches the latest.iso.binpkgs for our Jessie builds. I would find it sad to introduce a regression here.

I didn’t know that (or had forgotten it), so I see the point.

> Let’s take a step back. commit:fcf4e65 justifies dropping the symlinks because they are not necessary anymore (which is the part I disagree with), but it doesn’t say that keeping the symlink generation would not work or would break anything. Would they break anything? In other words, can Jenkins’ artifacts archiving support retrieve symlinks?

It doesn’t seem to be the case actually: https://issues.jenkins-ci.org/browse/JENKINS-5597

> If Jenkins master cannot retrieve symlinks from the slave, then it gets a little bit more complicated, and then I would understand that you’re reluctant to add this to your already pretty well stuffed plate. It would be easy enough to add a cronjob, on the master, that creates such symlinks in each $JOB/lastSuccessful/archive/ directory. This would perhaps better be done as part of the “refresh list of jobs” thing we already have. I could do it myself, I guess.

The cronjob idea might work, sure. I’ve made a first attempt in commit puppet-tails:4312b39.

#42 Updated by intrigeri 2015-09-02 07:28:07

  • Assignee changed from intrigeri to bertagaz
  • QA Check changed from Info Needed to Dev Needed

> It doesn’t seem to be the case actually: https://issues.jenkins-ci.org/browse/JENKINS-5597

Yeah, whatever :(

>> If Jenkins master cannot retrieve symlinks from the slave, then it gets a little bit more complicated, and then I would understand that you’re reluctant to add this to your already pretty well stuffed plate. It would be easy enough to add a cronjob, on the master, that creates such symlinks in each $JOB/lastSuccessful/archive/ directory. This would perhaps better be done as part of the “refresh list of jobs” thing we already have. I could do it myself, I guess.

> The cronjob idea might work, sure. I’ve made a first attempt in commit puppet-tails:4312b39

Looks good, yay! We’re now ready to start the deployment plan, right? Or is there any leftover issue that I’m forgetting?

#43 Updated by bertagaz 2015-09-02 07:35:09

intrigeri wrote:
>
> > The cronjob idea might work, sure. I’ve made a first attempt in commit puppet-tails:4312b39
>
> Looks good, yay! We’re now ready to start the deployment plan, right? Or is there any leftover issue that I’m forgetting?

Whoa, no back and forth on this one, what happened? :D

Yeah, I think we covered everything, so I’ll deploy that probably tomorrow. Hurray!

#44 Updated by intrigeri 2015-09-02 07:52:46

> Whoa, no back and forth on this one, what happened? :D

I must be getting sloppy in my reviewing, sorry. I’ll try to correct that :P

> Yeah, I think we covered everything, so I’ll deploy that probably tomorrow. Hurray!

Excellent! Perfect timing wrt. my vacation :)

#45 Updated by bertagaz 2015-09-03 00:51:12

intrigeri wrote:
> > Whoa, no back and forth on this one, what happened? :D
>
> I must be getting sloppy in my reviewing, sorry. I’ll try to correct that :P

Well, let’s say that’s because I’m getting better at writing code that satisfies you :D

> > Yeah, I think we covered everything, so I’ll deploy that probably tomorrow. Hurray!
>
> Excellent! Perfect timing wrt. my vacation :)

Awesome! have fun!

#46 Updated by bertagaz 2015-09-04 09:49:58

  • % Done changed from 50 to 70

All the Jenkins-related parts have been deployed and are live. Woo! So far it works quite well.

The `latest symlink` feature has to be adapted, though. One thing we didn’t know when we moved the artifacts to the /build-artifacts/ subdir is that Jenkins saves them with the same directory layout, i.e. in a /build-artifacts/ subdir of the archive directory.

The nightly.tails.boum.org part of the move is not done yet. Will try to do that before the end of the week-end.

#47 Updated by bertagaz 2015-09-05 02:01:41

bertagaz wrote:
> The `latest symlink` feature has to be adapted, though. One thing we didn’t know when we moved the artifacts to the /build-artifacts/ subdir is that Jenkins saves them with the same directory layout, i.e. in a /build-artifacts/ subdir of the archive directory.

Done, commit puppet-tails:0c424a3

#48 Updated by bertagaz 2015-09-05 07:47:40

  • Assignee changed from bertagaz to intrigeri
  • % Done changed from 70 to 90
  • QA Check changed from Dev Needed to Ready for QA

bertagaz wrote:
> The nightly.tails.boum.org part of the move is not done yet. Will try to do that before the end of the week-end.

Done, nightly.t.b.o is now back online!

#49 Updated by bertagaz 2015-09-05 07:59:49

Emailed reproducible-buildsATlists.alioth.debian.org to point them to the new URL to the nightly build of the feature/jessie branch.

#50 Updated by intrigeri 2015-09-14 10:05:54

  • Assignee changed from intrigeri to bertagaz
  • % Done changed from 90 to 80
  • QA Check changed from Ready for QA to Dev Needed

I’ve reviewed the changes in lizard’s manifests and in the tails Puppet module. I see that quite a few changes I hadn’t reviewed yet were necessary after the initial deployment, and I’m not sure how to split my reviews between different tickets, so here is the entire thing:

In:

       find "${JENKINS_JOBS_TEST_DIR}/" -mindepth 1 | \
               xargs -r rm -rf

… using find’s -delete option would feel safer. If this doesn’t work here for some reason, please use find -print0 | xargs -0, which is the only way to make such constructs safe.
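
That is, something like either of these (sketch):

    # preferred: let find delete the files itself
    find "${JENKINS_JOBS_TEST_DIR}/" -mindepth 1 -delete
    # or, if a pipe to xargs is really needed, make it NUL-safe:
    find "${JENKINS_JOBS_TEST_DIR}/" -mindepth 1 -print0 | xargs -0 -r rm -rf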

Same in:

+as_root_do find /tmp/TailsToaster/ -maxdepth 1 -name '*.png' | xargs -r mv -t $WORKSPACE/build-artifacts/

… if you really want to use | xargs instead of find -exec, please do use find -print0 | xargs -0.

Elsewhere, I see that previously unique exit codes are now reused with different meanings:

 
+[ ! -z $BRANCH ] || exit 2
+[ ! -z $COMMIT ] || exit 2
+[ ! -z $ISO ] || exit 2
+[ ! -z $PREVIOUS_ISO ] || exit 2
 [ ! -z $TEMP_DIR ] || exit 2

… and same for similar checks below. I’d rather have each exit code we define mean something unique within any given script of ours.
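
For instance (sketch; quoting the variables also avoids the unquoted-expansion pitfall mentioned earlier):

    [ -n "$BRANCH" ]       || exit 2
    [ -n "$COMMIT" ]       || exit 3
    [ -n "$ISO" ]          || exit 4
    [ -n "$PREVIOUS_ISO" ] || exit 5
    [ -n "$TEMP_DIR" ]     || exit 6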

I see:

+as_root_do chown -R jenkins $WORKSPACE/features/config

Why don’t we deploy the files in there with the permissions we want initially, instead of having to change them later on — did I miss something?

The “Rewrite of the ::tails::jenkins::artifact_store class” commit feels dubious to me: I understand why you did it, but the scope of responsibilities of the resulting class seems unclear, doesn’t match its own documentation (“Manage Jenkins artifacts”), and I’m not sure if it even matches its own name (especially since the class now mounts the filesystem read-only). Sounds like some refactoring is needed here. Conceptually, it seems to me that:

  • the “artifacts store” concept is now implemented on the Jenkins master side, with the mount (in nodes.pp + NFS server configured in tails::jenkins::master); it might be worth moving that block of functionality into a dedicated class (::artifact_store sounds like a good name) because the tails::jenkins::master class starts to be a bit too long for my taste, but that’s not a blocker; let’s keep it in mind, though, and address it before we pile up too much hard to maintain code in there; the plugins installation could also be moved to a named code block (a class);
  • the code in the artifact_store class is not about an artifact store anymore, but rather about connecting a artifact_store::client to that server, so it must be renamed accordingly.

Also, I could not find what class deploys files/jenkins/artifacts_store/remove_disabled_jobs_artifacts — is that file still needed?

#51 Updated by bertagaz 2015-09-15 06:41:44

  • Assignee changed from bertagaz to intrigeri
  • QA Check changed from Dev Needed to Info Needed

intrigeri wrote:
> I’ve reviewed the changes in lizard’s manifests and in the tails Puppet module. I see that quite a few changes I hadn’t reviewed yet were necessary after the initial deployment, and I’m not sure how to split my reviews between different tickets, so here is the entire thing:

Yes, as noted earlier in the discussion, some bits couldn’t be prepared in advance.

> In:
>
> […]
>
> … using find’s -delete option would feel safer. If this doesn’t work here for some reason, please use find -print0 | xargs -0, which is the only way to make such constructs safe.

Done, commit 63fff50

> Same in:
>
> […]
>
> … if you really want to use | xargs instead of find -exec, please do use find -print0 | xargs -0.

Done, I kept the xargs way of doing it because in my testing the -exec was not so reliable for that kind of operation. Commit 33ffc46

> Elsewhere, I see that previously unique exit codes are now reused with different meanings:
>
> […]
>
> … and same for similar checks below. I’d rather keep each exit code we define mean something unique in any given script of ours.

Hmm, OK, I misunderstood the kind of uniqueness of these exit codes. I thought it was related to the type of test, not to the variable being tested. But it makes sense, so I did it in commit 9c62bc6.

> I see:
>
> […]
>
> Why don’t we deploy the files in there with the permissions we want initially, instead of having to change them later on — did I miss something?

I think because in their original deployment I wanted them to be read-protected, but didn’t anticipate the problem it would raise later when deploying this test suite wrapper. I’ve corrected that in commit 8807e5a.

> The “Rewrite of the ::tails::jenkins::artifact_store class” commit feels dubious to me: I understand why you did it, but the scope of responsibilities of the resulting class seems unclear, doesn’t match its own documentation (“Manage Jenkins artifacts”), and I’m not sure if it even matches its own name (especially since the class now mounts the filesystem read-only). Sounds like some refactoring is needed here. Conceptually, it seems to me that:
>
> * the “artifacts store” concept is now implemented on the Jenkins master side, with the mount (in nodes.pp + NFS server configured in tails::jenkins::master); it might be worth moving that block of functionality into a dedicated class (::artifact_store sounds like a good name) because the tails::jenkins::master class starts to be a bit too long for my taste, but that’s not a blocker; let’s keep it in mind, though, and address it before we pile up too much hard to maintain code in there; the plugins installation could also be moved to a named code block (a class);
> * the code in the artifact_store class is not about an artifact store anymore, but rather about connecting a artifact_store::client to that server, so it must be renamed accordingly.

Yes, this artifacts_store rewrite was a bit sloppy, and it lost a bit of its meaning in the process. I wanted to get nightly.t.b.o back up fast, so I did the simplest and most effective thing. I agree it’s not the most elegant implementation, but it’s not a blocker for this ticket. Note I already suggested a while back splitting this growing manifest and putting e.g. the plugins part in a dedicated class. Shall I open a new ticket for this artifacts_store thingie, child of Feature #10153?

> Also, I could not find what class deploys files/jenkins/artifacts_store/remove_disabled_jobs_artifacts — is that file still needed?

It’s not used anymore, as Jenkins takes care of removing old job directories by itself. I was still reluctant to delete it, even though Git would keep it in its history. I can do that if you insist; it’s not easy to delete scripts we write. :)

#52 Updated by intrigeri 2015-09-15 07:46:01

  • Assignee changed from intrigeri to bertagaz
  • QA Check changed from Info Needed to Dev Needed

Everything I’m not commenting on looks good, congrats :)

> Yes, as noted earlier in the discussion, some bits couldn’t be prepared in advance.

I forgot the details, and this ticket is getting too long for me to find them now => sorry if I’m repeating myself, but I find this statement quite puzzling. I see no technical reason why such changes can’t be developed and tested locally. I would greatly appreciate it if you took the time to investigate what it would take for you to have a dev + testing environment, so that things can be prepared and tested in advance, and not on our production infrastructure. You reported earlier this year that our Puppet stuff is good enough to do that locally, so if the only blocker is hardware, then please specify it and ask for it via the appropriate channels.

> Done, I kept the xargs way of doing it because in my testing the -exec was not so reliable for that kind of operation.

OK.

> Commit 33ffc46

I’ve not tested it, but I don’t understand how it can possibly work without copying all files from the temp dir to the “build” artifacts one: IIRC -print0 before any selector will print all files, even those that don’t match the conditions. Compare find -name Changelog -print with find -print -name Changelog if in doubt :)

>> The “Rewrite of the ::tails::jenkins::artifact_store class” commit feels dubious to me:
> […] but is not a blocker for this ticket.

Renaming the offending class to tails::jenkins::artifact_store::client and updating its single line of documentation would be enough to fix the biggest concerns I’ve expressed. I suggest we just do it instead of spending any more time arguing about whether it’s a blocker.

(And if you want to argue: I entirely disagree it’s not a blocker. Refactoring to leave the affected code structure in good shape may be the final part of the job, but it is part of doing the job right, especially when most of the job was precisely to reorganize responsibilities between pieces of code/infra, and spread them differently.)

> Note that I already suggested a while back splitting this growing manifest and putting e.g. the plugins part in a dedicated class. Shall I open a new ticket for this artifacts_store thingie, as a child of Feature #10153?

For the bits that we agree are not blockers right now, yes, please :) Not sure about the “child of Feature #10153” part, though, but perhaps that’s because I don’t understand what Feature #10153 really is about, in terms of project organization; specific milestones and “deliverable for” values would clarify that.

>> Also, I could not find what class deploys files/jenkins/artifacts_store/remove_disabled_jobs_artifacts — is that file still needed?

> It’s not used anymore, as Jenkins takes care of removing old job directories by itself. I was still reluctant to delete it, even though Git would keep it in its history. I can do that if you insist; it’s not easy to delete scripts we write. :)

OK, I’ve deleted it then.

#53 Updated by bertagaz 2015-09-15 09:50:01

  • Assignee changed from bertagaz to intrigeri
  • QA Check changed from Dev Needed to Ready for QA

intrigeri wrote:
> > Yes, as noted earlier in the discussion, some bits couldn’t be prepared in advance.
>
> I forgot the details, and this ticket is getting too long for me to find them now => sorry if I’m repeating myself, but I find this statement quite puzzling. I see no technical reason why such changes can’t be developed and tested locally. I would greatly appreciate it if you took the time to investigate what it would take for you to have a dev + testing environment, so that things can be prepared and tested in advance, and not on our production infrastructure. You reported earlier this year that our Puppet stuff is good enough to do that locally, so if the only blocker is hardware, then please specify it and ask for it via the appropriate channels.

It’s not hardware that I’m missing, it’s time. My plate is already too full to add one more day of work to install some VMs, deploy the Puppet setup and manifests, and then fix whatever bugs might surface in our manifests as a result, not to mention the back and forth reviewing it implies. Also, until now I’ve managed to test things in a way that gives confidence in their deployment.

Note that the “in advance” here relates to the artifacts_store rewrite, in particular the removal of the old NFS bits.

Also note that your other remarks made here are not tied to this ticket, but are in part about a temporary hack to intensively test the test suite behavior on Lizard. This will also help us know whether it works well. Most (if not all) of them were polishing, and not bugs that would break everything.

> I’ve not tested it, but I don’t understand how it can possibly work without copying all files from the temp dir to the “build” artifacts one: IIRC -print0 before any selector will print all files, even those that don’t match the conditions. Compare find -name Changelog -print with find -print -name Changelog if in doubt :)

Hum, right, I think I spent more time testing the other one and just quickly went through the part of the find(1) man page related to this sole option. Also, I’ve been working a lot these past days, and my brain is starting to smoke. My bad. :)

Fixed now.

> >> The “Rewrite of the ::tails::jenkins::artifact_store class” commit feels dubious to me:
> > […] but it is not a blocker for this ticket.
>
> Renaming the offending class to tails::jenkins::artifact_store::client and updating its single line of documentation would be enough to fix the biggest concerns I’ve expressed. I suggest we just do it instead of spending any more time arguing about whether it’s a blocker.
>
> (And if you want to argue: I entirely disagree it’s not a blocker. Refactoring to leave the affected code structure in good shape may be the final part of the job, but it is part of doing the job right, especially when most of the job was precisely to reorganize responsibilities between pieces of code/infra, and spread them differently.)

I didn’t mean to argue, I just misunderstood which part of this comment your ‘not a blocker’ remark was referring to. Smoking brain, bla.

Anyway, I did the renaming.

> For the bits that we agree are not blockers right now, yes, please :) Not sure about the “child of Feature #10153” part, though, but perhaps that’s because I don’t understand what Feature #10153 really is about, in terms of project organization; specific milestones and “deliverable for” would clarify.

That’s because I was talking about your whole comment on the artifacts_store shape. I won’t open a ticket for the Jenkins plugins refactoring: it’s been bothering me for a while, and now that you agree, I’ll happily spend the two minutes it takes to do it in the next few days.

#54 Updated by intrigeri 2015-09-15 10:48:22

  • Assignee changed from intrigeri to bertagaz
  • QA Check changed from Ready for QA to Dev Needed

> It’s not hardware that I’m missing, it’s time. My plate is already full enough […]

OK — I understand it’s not possible to do that during this iteration, no problem.
I would greatly appreciate it if it was done before the next similar big project, though :)

> Also note that your other remarks made here are not tied to this ticket, but in part about a temporary hack to intensively test the test suite behavior on Lizard […]

Point taken. The thing is, I usually review changes only, so if these hacks are not fully rewritten when they’re reused and put into production, chances are that I won’t review the original code (the one that needed polishing). This, plus my well-known tendency to be a control freak, explains why it was not too bad an idea to review the whole thing that was merged already.

> Hum, right, I think I spent more time on testing the other one and just quickly went through the man (1) find part related to this sole option. Also I’ve been working a lot this past days, and my brain starts to smoke. My bad. :)

:)

> Fixed now.

Not really. Simple test case:

$ mkdir bla
$ touch bla/bli.png
$ touch bla/debug.log
$ find bla -name '*.png' -o -name 'debug.log'
bla/debug.log
bla/bli.png
$ find bla -name '*.png' -o -name 'debug.log' -print
bla/debug.log

This should work better, since without explicit grouping the -print applies only to the last -o alternative:

$ find bla \( -name '*.png' -o -name 'debug.log' \) -print
bla/debug.log
bla/bli.png

> Anyway, I did the renaming.

:))

Agreed with everything else.

#55 Updated by bertagaz 2015-09-15 12:03:34

  • Assignee changed from bertagaz to intrigeri
  • QA Check changed from Dev Needed to Ready for QA

intrigeri wrote:
> OK — I understand it’s not possible to do that during this iteration, no problem.
> I would greatly appreciate it if it was done before the next similar big project, though :)

Ack, that would need to be taken into account in the scheduled time.

> > Also note that your other remarks made here are not tied to this ticket, but in part about a temporary hack to intensively test the test suite behavior on Lizard […]
>
> Point taken. The thing is, I usually review changes only, so if these hacks are not fully rewritten when they’re reused and put into production, chances are that I won’t review the original code (the one that needed polishing). This, plus my well-known tendency to be a control freak, explains why it was not too bad an idea to review the whole thing that was merged already.

Yes, makes sense. And with so many things having happened, it’s all a bit blurred together right now.

> Not really. Simple test case:
>
> […]
>
> This should work better:
>
> […]

Erf, I’d read this kind of syntax somewhere but never used it. Thanks for the tip, it sure seems to work better. Applied!

#56 Updated by intrigeri 2015-09-16 05:29:52

  • Assignee changed from intrigeri to bertagaz
  • QA Check changed from Ready for QA to Dev Needed

>> OK — I understand it’s not possible to do that during this iteration, no problem. I would greatly appreciate it if it was done before the next similar big project, though :)

> Ack, that would need to be taken into account in the scheduled time.

Of course. We would not budget pure coding time for a Tails feature without including the time needed to build and test ISO images before pushing to a production branch. Same here, because… “infrastructure as code” :)

>> This should work better, since without explicit grouping the -print applies only to the last -o alternative:
>>
>> […]

> Erf, I’d read this kind of syntax somewhere but never used it. Thanks for the tip, it sure seems to work better. Applied!

Great. Pushed a nitpicking commit on top (untested, sorry if I broke stuff), please review.

Now, the last thing I’d like to see before we mark this ticket as resolved is an example pair of jobs that actually share artifacts: that’s the whole point of the exercise this ticket is about. I think we now have all the foundations in place to make this possible (congrats!), and we just need a practical example to demonstrate that it actually works.

These two jobs don’t have to do anything useful (and should not block on anything more complicated like the ISO history thing). Actually, the simpler these test cases are, the better: e.g. just have one job echo bla > test.iso, and the other grep -x -F bla test.iso.
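
For instance, the two jobs’ shell build steps could be as trivial as this sketch (job names and the artifact hand-over between them are left out):

$ # first job: produce a trivial artifact, then archive test.iso
$ echo bla > test.iso
$ # second job: verify the artifact, once test.iso is in its workspace
$ grep -x -F bla test.iso
bla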

If we have such a test case already, please just point me to it :)

#57 Updated by bertagaz 2015-09-16 06:26:52

  • Assignee changed from bertagaz to intrigeri
  • QA Check changed from Dev Needed to Info Needed

intrigeri wrote:
> Of course. We would not budget pure coding time for a Tails feature without including the time needed to build and test ISO images before pushing to a production branch. Same here, because… “infrastructure as code” :)

:D

> Great. Pushed a nitpicking commit on top (untested, sorry if I broke stuff), please review.

Fine with me; there’s no way this change breaks anything.

> Now, the last thing I’d like to see before we mark this ticket as resolved is an example pair of jobs that actually share artifacts: that’s the whole point of the exercise this ticket is about. I think we now have all the foundations in place to make this possible (congrats!), and we just need a practical example to demonstrate that it actually works.
>
> These two jobs don’t have to do anything useful (and should not block on anything more complicated like the ISO history thing). Actually, the simpler these test cases are, the better: e.g. just have one job echo bla > test.iso, and the other grep -x -F bla test.iso.
>
> If we have such a test case already, please just point me to it :)

We don’t have one publicly yet; I haven’t pushed my code yet.

But my idea was simply to use wget to get the needed artifacts. I thought I could deal with just the build number to get the ISO, but it’s more complicated than that, so in the end we’ll also need the ISO name (thus we could have dealt with the old nightly setup the same way, in fact, but well… :/).

But if you want to test it, you can also ssh into an isotester, log in as the jenkins user, and run wget http://jenkins.lizard:8080/job/$JOB_NAME/$BUILD_NUMBER/artifact/build-artifacts/$ISO; you’ll see that it works.
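
The test job’s fetch step would then look something like this sketch (UPSTREAM_JOB, UPSTREAM_BUILD_NUMBER and ISO_NAME are hypothetical parameter names for values the upstream build job would have to pass along):

$ wget "http://jenkins.lizard:8080/job/${UPSTREAM_JOB}/${UPSTREAM_BUILD_NUMBER}/artifact/build-artifacts/${ISO_NAME}"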

#58 Updated by intrigeri 2015-09-16 06:39:34

  • Assignee changed from intrigeri to bertagaz

> But my idea was simply to use wget to get the needed artifacts. I thought I could deal with just the build number to get the ISO, but it’s more complicated than that,

Did you try pushing the ISO build job artifacts to the isotester instead? I’ve seen other projects do this with copyartifact. Sounds much simpler, no?

> so in the end we’ll also need the ISO name (thus we could have dealt with the old nightly setup the same way, in fact, but well… :/).

Can’t the latest.iso symlink help us avoid this problem?

#59 Updated by bertagaz 2015-09-16 06:59:23

  • Assignee changed from bertagaz to intrigeri

intrigeri wrote:
> > But my idea was simply to use wget to get the needed artifacts. I thought I could deal with just the build number to get the ISO, but it’s more complicated than that,
>
> Did you try pushing the ISO build job artifacts to the isotester instead? I’ve seen other projects do this with copyartifact. Sounds much simpler, no?

Yes, I’ve looked at it but was not that tempted to use one more old Jenkins plugin where a simple wget would work. In my mind, “Jenkins plugin” doesn’t mean “simple”, dunno why.

> > so in the end we’ll also need the ISO name (thus we could have dealt with the old nightly setup the same way, in fact, but well… :/).
>
> Can’t the latest.iso symlink help us avoid this problem?

Well, I thought about it, but it felt a bit racy now that these links are created by a cronjob, and not directly at the end of the build. I don’t think we should rely on that.

#60 Updated by intrigeri 2015-09-16 07:34:23

  • Assignee changed from intrigeri to bertagaz

> intrigeri wrote:
>> Did you try pushing the ISO build job artifacts to the isotester instead? I’ve seen other projects do this with copyartifact. Sounds much simpler, no?

> Yes, I’ve looked at it but was not that tempted to use one more old Jenkins plugin where a simple wget would work.

I’m confused. In my understanding, the whole point of moving the artifacts storage to the Jenkins master’s native mechanism was precisely to let it know about those artifacts (and the job run ID -> artifact files mapping), so that, based on that knowledge, the Jenkins master can itself send exactly the right artifacts to the ISO testers that need them. That’s how I understand the discussion we had 3 months ago around Feature #9597#note-4, which triggered all the NFS-related work you’ve done.

And now, we’re back to having isotesterN fetch artifacts themselves, which they could very well do already in the previous setup… as long as they knew the exact ISO filename, which you realized we need to know as well with the wget solution you’re working on.

So I’m a bit lost, and don’t understand why we went through these big NFS changes if we’re not going to benefit from the resulting setup in the end.

>> Can’t the latest.iso symlink help us avoid this problem?

> Well, I thought about it, but it felt a bit racy now that these links are created by a cronjob, and not directly at the end of the build. I don’t think we should rely on that.

Totally makes sense :)

#61 Updated by bertagaz 2015-09-17 07:35:54

intrigeri wrote:
> I’m confused. In my understanding, the whole point of moving the artifacts storage to the Jenkins master’s native mechanism was precisely to let it know about those artifacts (and the job run ID -> artifact files mapping), so that, based on that knowledge, the Jenkins master can itself send exactly the right artifacts to the ISO testers that need them. That’s how I understand the discussion we had 3 months ago around Feature #9597#note-4, which triggered all the NFS-related work you’ve done.
>
> And now, we’re back to having isotesterN fetch artifacts themselves, which they could very well do already in the previous setup… as long as they knew the exact ISO filename, which you realized we need to know as well with the wget solution you’re working on.
>
> So I’m a bit lost, and don’t understand why we went through these big NFS changes if we’re not going to benefit from the resulting setup in the end.

Well, at least this resulted in a good polishing of that nasty initial NFS deployment.

I’ll try to overcome my reluctance and give the CopyArtifact plugin a try, then. At least if we run into trouble with it, we’ll be able to switch to a more hand-made solution.

I’ll test it while working on Feature #10117 and the testing of job chaining.

#62 Updated by bertagaz 2015-09-23 01:30:41

  • Target version changed from Tails_1.6 to Tails_1.7

#63 Updated by sajolida 2015-10-01 07:28:46

  • Priority changed from Normal to Elevated

Woh! That’s a freaking long ticket; good luck!

Still, note that this is due on October 15, which is actually a bit before Tails 1.7. Raising the priority accordingly.

#64 Updated by bertagaz 2015-10-10 02:00:37

  • Assignee changed from bertagaz to intrigeri
  • QA Check changed from Info Needed to Ready for QA

intrigeri wrote:
> Now, the last thing I’d like to see before we mark this ticket as resolved is an example pair of jobs that actually share artifacts: that’s the whole point of the exercise this ticket is about. I think we now have all the foundations in place to make this possible (congrats!), and we just need a practical example to demonstrate that it actually works.

So I’ve created three new test jobs, for the stable, devel and experimental branches, that are triggered by their respective build jobs. They’re meant to be examples of what the test jobs would look like.

The build job artifact copying step is working, as one can see in the latest test jobs:

11:20:37 Copied 8 artifacts from "build_Tails_ISO_devel" build number 1280

So I guess we can consider this ticket done! :)

#65 Updated by intrigeri 2015-10-13 23:35:26

  • Assignee changed from intrigeri to bertagaz
  • QA Check changed from Ready for QA to Dev Needed

Yay, looks great!

Can we get a less verbose wget output? See e.g. https://jenkins.tails.boum.org/job/test_Tails_ISO_devel/16/consoleFull around 19:19:59.

#66 Updated by bertagaz 2015-10-14 03:11:48

  • Assignee changed from bertagaz to intrigeri
  • QA Check changed from Dev Needed to Ready for QA

intrigeri wrote:
> Can we get a less verbose wget output? See e.g. https://jenkins.tails.boum.org/job/test_Tails_ISO_devel/16/consoleFull around 19:19:59.

Commit 474cb7d of the puppet-tails repo.
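
(In essence, it just quiets wget down, e.g. something like the following sketch, where $ARTIFACT_URL is a placeholder for the URL the job computes:)

$ wget --no-verbose "$ARTIFACT_URL"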

#67 Updated by intrigeri 2015-10-14 04:36:27

  • Status changed from In Progress to Resolved
  • Assignee deleted (intrigeri)
  • % Done changed from 80 to 100
  • QA Check changed from Ready for QA to Pass

Awesome, thanks!