Feature #12633
Lower the workload caused by reproducible builds Jenkins jobs
100%
Description
Currently, when I push a commit to a base branch, 3 ISOs are built from the same commit:
- one ISO is built by the build_Tails_ISO_$branch job;
- two ISOs are built by the reproducibly_build_Tails_ISO_$branch job and then compared.
We only need to build 2 of these ISOs to get what we want, so there’s room for improvement wrt. our CI infrastructure’s latency and throughput, and thus its usefulness.
I suggest instead having any successful build_Tails_ISO_$branch job trigger another job, that would:
- get as input the ISO built by build_Tails_ISO_$branch (a checksum would be enough in the best case, but we really need that ISO to compare when they differ), and any relevant parameters, e.g. the commit ID and base branch commit ID;
- build a second ISO image (ideally with different build settings, e.g. a time offset applied and a different CPU, but this is certainly not a blocker for this ticket);
- run diffoscope to compare these ISOs;
- if these ISOs are the same, then delete all artifacts; otherwise, archive both ISO build artifacts + the diffoscope output (a rough sketch of this step follows below).
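To make that last step concrete, here is a minimal shell sketch of the compare-and-archive logic (untested; the variable names, paths, and the cheap cmp run before diffoscope are illustrative assumptions, not an actual Jenkins builder):

ISO_1="$1"          # ISO fetched from the build_Tails_ISO_$branch job
ISO_2="$2"          # ISO rebuilt by the new job
ARTIFACTS_DIR="$3"  # directory that Jenkins archives

if cmp --silent "$ISO_1" "$ISO_2"; then
    # Identical images: nothing worth keeping.
    rm -f "$ARTIFACTS_DIR"/*
else
    # Different images: keep both ISOs plus the diffoscope report.
    diffoscope --text "$ARTIFACTS_DIR/diffoscope.txt" \
               --html "$ARTIFACTS_DIR/diffoscope.html" \
               "$ISO_1" "$ISO_2" || true
    cp "$ISO_1" "$ISO_2" "$ARTIFACTS_DIR/"
fi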
This looks very similar to what we already do for the test_Tails_ISO_* jobs, so I expect it will be pretty cheap to implement.
And as a bonus, in most cases the 2 ISOs will be built on two different isobuilders, which increases the value of the reproducibility testing :)
Subtasks
Related issues
- Related to Tails - | Resolved | 2017-07-07
- Related to Tails - | Resolved | 2017-11-06
- Related to Tails - | Resolved | 2017-11-09
- Related to Tails - | Resolved | 2017-11-10
- Related to Tails - Bug #14871: Deduplicate the ticket_qa_check Redmine query function into the Tails pythonlib | Confirmed | 2017-10-20
- Related to Tails - | Resolved | 2017-10-22
History
#1 Updated by intrigeri 2017-06-07 16:50:01
- related to Feature #12654: Introduce more variations in our reproducibility CI tests added
#2 Updated by intrigeri 2017-06-07 16:50:32
- Assignee changed from bertagaz to intrigeri
- Deliverable for deleted (289)
I’ll take care of it unless you want it.
#3 Updated by bertagaz 2017-06-09 10:39:35
intrigeri wrote:
> I’ll take care of it unless you want it.
As you wish. That’s easy and fun to set up and good to play with our Jenkins jobs setup, but if you ever want to do something else, feel free to reassign to me.
#4 Updated by intrigeri 2017-06-09 14:08:19
> As you wish. That’s easy and fun to set up and good to play with our Jenkins jobs setup, but if you ever want to do something else, feel free to reassign to me.
Sure!
#5 Updated by intrigeri 2017-07-07 15:41:08
- blocks Feature #13436: Have Jenkins jobs that reproduce ISOs when a branch ticket is Ready for QA added
#6 Updated by intrigeri 2017-07-07 15:42:11
- Deliverable for set to 289
#7 Updated by intrigeri 2017-07-25 09:46:52
- Description updated
#8 Updated by intrigeri 2017-07-30 08:41:53
- Priority changed from Normal to Elevated
- Target version changed from Tails_3.2 to Tails_3.3
I’ve scheduled some time to do this on October 9-10. Setting elevated prio as we’ll be close to the end of our contract.
#9 Updated by intrigeri 2017-09-18 16:42:47
- Assignee changed from intrigeri to bertagaz
bertagaz wrote:
> intrigeri wrote:
> > I’ll take care of it unless you want it.
> As you wish. That’s easy and fun to set up and good to play with our Jenkins jobs setup, but if you ever want to do something else, feel free to reassign to me.
Please go ahead.
#10 Updated by bertagaz 2017-10-18 14:04:04
intrigeri wrote:
> Please go ahead.
OK, so I’ve spent time trying to figure out how to implement that. Sadly, it’s not so easy.
The usual way to do it would be to trigger a build of the reproducibly_build_Tails_ISO job from the build_Tails_ISO job. But if we do that, we’d try to reproduce every ISO that is built in Jenkins, which is not what we want to do.
Other ways to trigger a build are not really usable: one of them is a builder (in JJB language), which means it would run before the build_Tails_ISO job archives its ISO. Another way is to use a build-result trigger in the reproducibly_build_Tails_ISO job, which would watch whether a related build_Tails_ISO job has succeeded. But that sounds like a racy solution, as it uses a cronjob to periodically check whether an upstream job succeeded. What happens if another upstream build succeeds in the meantime, before it fires up a reproducibly_build_Tails_ISO job, is quite unclear, as there does not seem to be a way to refer to a particular upstream build number.
So the only way I’ve found to do that is to use the conditional-step plugin in the reproducibly_build_Tails_ISO job. It uses a script to determine whether it’s building a base branch and thus should run the whole reproducibly_build_Tails_ISO job. A bonus is that with this we can also check whether the branch ticket is RfQA (cf. Feature #13436).
The downside of this solution is that this plugin depends on versions of other plugins that are a bit newer than what we have, so it’s very hard to say whether it would work in our current infra.
The main blocker is the token-macro plugin: every version of the conditional-step plugin requires it to be 1.51. But reading the latter plugin’s source, it may be that it works fine with our current version of the token-macro plugin. It may also be that upgrading the token-macro plugin won’t break anything, but a lot of other plugins depend on it. So that’s hard to say, and I don’t know how to cope with that apart from tackling Bug #10068, which is waaaaay out of scope.
I’ve pushed my work to the jenkins-jobs:feature/12633-lower-reproducible-builds-workload and puppet-tails:feature/12633-lower-reproducible-builds-workload branches if you want to have a look.
#11 Updated by bertagaz 2017-10-19 09:55:30
- Status changed from Confirmed to In Progress
- Assignee changed from bertagaz to intrigeri
- % Done changed from 0 to 30
- Feature Branch set to puppet-tails:feature/12633-lower-reproducible-builds-workload;jenkins-jobs:feature/12633-lower-reproducible-builds-workload
bertagaz wrote:
> So the only way I’ve found to do that is to use the conditional-step plugin in the reproducibly_build_Tails_ISO job.
Well, I found another one. I’ve pushed changes in the branches that make the decide_if_reproduce script output a file in the artifacts directory, and changed the JJB builders to check its existence to decide whether to run. It’s a simulation of the plugin using builders, so that we don’t need it. Please have a look; I think it’s in good enough shape to be tested, but I may have missed something.
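For context, a rough sketch of what such a flag-file guard can look like at the top of each conditional builder (the flag path and messages are assumptions, not the actual JJB definitions):

# Skip this builder's real work when decide_if_reproduce did not create the flag file.
FLAG="${WORKSPACE}/build-artifacts/tails-reproduce"
if [ ! -f "$FLAG" ]; then
    echo "No reproducibility test requested for this build; skipping."
    exit 0
fi
# ...otherwise run this builder's actual work (rebuild, download, compare).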
#12 Updated by bertagaz 2017-10-19 09:56:47
- related to Feature #13436: Have Jenkins jobs that reproduce ISOs when a branch ticket is Ready for QA added
#13 Updated by intrigeri 2017-10-20 08:43:46
- QA Check set to Ready for QA
My understanding is that I’m being asked to review the design you chose + an untested draft implementation. Adjusting metadata accordingly.
#14 Updated by intrigeri 2017-10-20 09:06:17
> The usual way to do it would be to trigger a build of the reproducibly_build_Tails_ISO job from the build_Tails_ISO job. But if we do that, we’d try to reproduce every ISO that is built in Jenkins, which is not what we want to do.
Indeed. To clarify, if we only cared about this ticket, i.e. base branches, it would be good enough: we could simply enable this trigger in the build jobs we generate for base branches. But Feature #13436 is also part of the plan, so that won’t be good enough.
> Another way is to use a build-result trigger in the reproducibly_build_Tails_ISO job, which would watch whether a related build_Tails_ISO job has succeeded. But that sounds like a racy solution,
Thanks for staying away from solutions that require reasoning about asynchronous operations. This seems wise to me.
#15 Updated by bertagaz 2017-10-20 09:11:51
- related to Bug #14871: Deduplicate the ticket_qa_check Redmine query function into the Tails pythonlib added
#16 Updated by intrigeri 2017-10-20 09:37:05
- Assignee changed from intrigeri to bertagaz
- QA Check changed from Ready for QA to Dev Needed
> Well, I found another one. I’ve pushed changes in the branches that make the decide_if_reproduce script output a file in the artifacts directory, and changed the JJB builders to check its existence to decide whether to run. It’s a simulation of the plugin using builders, so that we don’t need it.
The general concept seems OK to me, but I lack some Jenkins expertise to really judge, so here I’ll focus on general software architecture/design/implementation matters and on the resulting developer experience PoV.
My biggest worry is: will the ISO test job be started in parallel with the reproducibly_build_Tails_ISO_{name} one, or will it block until that one has completed?
It’s fine to duplicate the ticket_qa_check code we already had during the PoC phase. But please refactor before this is called done (we already have a Python library so it should be quite easy).
Let’s not hard-code the list of base branches in yet another place: we have it already in our Python library. But again, that’s fine at the PoC stage.
The File['/usr/local/bin/decide_if_reproduce'] resource should depend on the script’s dependencies (e.g. the Python Redmine library).
Do we still need if args.apikey_file is None now that that argument is required?
“the reproducible rebuild” feels wrong: we don’t know yet if it’ll be reproducible, the whole purpose is to check whether it is. Maybe “build reproducibility test” instead? In a few other places “reproduce” is used in ways that have the same semantics problem.
Now that arguments are required and the main logic is about computing a boolean value, the code could express what it does in a much more straightforward way with something like (untested):
if os.environ['JOB_NAME'] in BASE_BRANCHES_JOBS \
   or (int(os.environ['TAILS_TICKET']) != 0
       and ticket_qa_check(
           apikeyfile=args.apikey_file,
           ticket=os.environ['TAILS_TICKET']) == 'Ready for QA'):
    open(args.reproduce_file, 'a').close()
What’s the purpose of checkout_upstream_base_branch_head?
I’m not a big fan of the list of maybe_* builders in reproducibly_build_Tails_ISO.yaml; the flow control is unclear to me (e.g. will move_artifacts_to_dir_2 run if the 2nd ISO build fails?), it feels wrong to check the same boolean condition N times, and quite simply it’s very unclear what “maybe” means (one can’t know before reading the implementation, which raises a red flag with “badly named function” written on it). How about checking and aborting early if the boolean flag file is not present?
Linking to the artifacts of the last successful reproducibly_build_Tails_ISO_* job seems to result from a mechanical change without much thought put into the resulting developer experience (which might be OK-ish at PoC stage but let’s please adopt a user-centric PoV ASAP in the design & development process, because experience showed that it’s hard to patch this back in later on): when an ISO can be reproducibly built a 2nd time, who cares about the artifacts? As a developer I’m more interested in 1. the artifacts for the last reproduction failure (that link could be adjusted accordingly, I guess); 2. whether the last reproduction attempt was successful (IIRC I can see that in the project’s sidebar so I guess it doesn’t need another link).
Let me know once these small issues have been fixed and more importantly, once the whole thing has been tested (bonus points if you give me hints wrt. testing it myself on my local Jenkins setup :)
#17 Updated by bertagaz 2017-10-20 10:04:01
intrigeri wrote:
> My biggest worry is: will the ISO test job be started in parallel with the reproducibly_build_Tails_ISO_{name} one, or will it block until that one has completed?
From what I understood, the jobs will be parallelized, as they’re not running on the same slave, and the test job does not depend on the reproducible job.
> It’s fine to duplicate the ticket_qa_check code we already had during the PoC phase. But please refactor before this is called done (we already have a Python library so it should be quite easy).
Opened a ticket referenced here for that, because it’s a bit more work and time to get that in place in the infra. That’s less urgent than this ticket.
> What’s the purpose of checkout_upstream_base_branch_head?
Well, we need to check out the exact same state of the Git repo as the upstream build job, including the base branch at the same HEAD. Between the git clone stage in the build job and the one in the reproducibility job, there may have been pushes to the base branch.
> I’m not a big fan of the list of maybe_* builders in reproducibly_build_Tails_ISO.yaml; the flow control is unclear to me (e.g. will move_artifacts_to_dir_2 run if the 2nd ISO build fails?), it feels wrong to check the same boolean condition N times, and quite simply it’s very unclear what “maybe” means (one can’t know before reading the implementation, which raises a red flag with “badly named function” written on it). How about checking and aborting early if the boolean flag file is not present?
How do you do that in Jenkins? A build step is supposed to run successfully or fail the build; it does not control the rest of the build step chain and flow.
> Linking to the artifacts of the last successful reproducibly_build_Tails_ISO_* job seems to result from a mechanical change without much thought put into the resulting developer experience (which might be OK-ish at PoC stage but let’s please adopt a user-centric PoV ASAP in the design & development process, because experience showed that it’s hard to patch this back in later on): when an ISO can be reproducibly built a 2nd time, who cares about the artifacts? As a developer I’m more interested in 1. the artifacts for the last reproduction failure (that link could be adjusted accordingly, I guess); 2. whether the last reproduction attempt was successful (IIRC I can see that in the project’s sidebar so I guess it doesn’t need another link).
I’m not sure I get what you mean here. What link, where, and what ‘mechanical change’?
#18 Updated by intrigeri 2017-10-20 10:38:58
>> It’s fine to duplicate the ticket_qa_check code we already had during the PoC phase. But please refactor before this is called done (we already have a Python library so it should be quite easy).
> Opened a ticket referenced here for that, because it’s a bit more work and time to get that in place in the infra. That’s less urgent than this ticket.
Deal. I’m fine with postponing this a little bit but I’ve adjusted metadata of Bug #14871 to clarify what chunk of work it’s part of.
>> What’s the purpose of checkout_upstream_base_branch_head?
> Well, we need to check out the exact same state of the Git repo as the upstream build job, including the base branch at the same HEAD. Between the git clone stage in the build job and the one in the reproducibility job, there may have been pushes to the base branch.
I understand we need to use the base branch in the state it was in during the initial build. So far, so good. But I don’t understand how the checkout part will help: in auto/config we do git merge --no-edit "origin/${GIT_BASE_BRANCH}", so I don’t see what checking out a local branch has to do with it. Perhaps I’m confused because I don’t know what origin points to in the context of this job. Anyway, I don’t care much and I’m inclined to trust you on this one so let’s not bother. Please just ensure it works as intended when the whole thing is tested :)
>> I’m not a big fan of the list of maybe_* builders in reproducibly_build_Tails_ISO.yaml; the flow control is unclear to me (e.g. will move_artifacts_to_dir_2 run if the 2nd ISO build fails?), it feels wrong to check the same boolean condition N times, and quite simply it’s very unclear what “maybe” means (one can’t know before reading the implementation, which raises a red flag with “badly named function” written on it). How about checking and aborting early if the boolean flag file is not present?
> How do you do that in Jenkins?
I don’t know. Sorry, I can’t help you design this given the time constraints you’re working under. The best I can do is reviewing.
> A build step is supposed to run successfully or fail the build; it does not control the rest of the build step chain and flow.
Then perhaps a chain of super-fine-grained build steps is not the right approach?
>> Linking to the artifacts of the last successful reproducibly_build_Tails_ISO_* job seems to result from a mechanical change without much thought put into the resulting developer experience (which might be OK-ish at PoC stage but let’s please adopt a user-centric PoV ASAP in the design & development process, because experience showed that it’s hard to patch this back in later on): when an ISO can be reproducibly built a 2nd time, who cares about the artifacts? As a developer I’m more interested in 1. the artifacts for the last reproduction failure (that link could be adjusted accordingly, I guess); 2. whether the last reproduction attempt was successful (IIRC I can see that in the project’s sidebar so I guess it doesn’t need another link).
> I’m not sure I get what you mean here. What link, where, and what ‘mechanical change’?
I’m talking about the links you’ve modified yesterday (in the non-atomic commit 11497165e109185df9a7e915f4028967b44643f8 in jenkins-jobs.git) as part of the branch you asked me to look at.
#19 Updated by bertagaz 2017-10-20 13:27:42
intrigeri wrote:
> >> What’s the purpose of checkout_upstream_base_branch_head?
>
> I understand we need to use the base branch in the state it was in during the initial build. So far, so good. But I don’t understand how the checkout part will help: in auto/config we do git merge --no-edit "origin/${GIT_BASE_BRANCH}", so I don’t see what checking out a local branch has to do with it. Perhaps I’m confused because I don’t know what origin points to in the context of this job. Anyway, I don’t care much and I’m inclined to trust you on this one so let’s not bother. Please just ensure it works as intended when the whole thing is tested :)
In this step, the origin of this repo points to the immerda repo. In the clone of this repo, where the merge and build happen (so in Vagrant) during the build step, the origin is artificially set to the same immerda URI.
So if the HEAD of the base branch on immerda has changed between the cloning in the build_Tails_ISO job and the one in the reproducibly_build_Tails_ISO job, we won’t merge the same base branch state.
We need to fake that the origin (immerda) base branch is at the HEAD we want, so that in the Vagrant repo clone we’ll merge the same faked immerda origin base branch state.
From my tests, git-update-ref is the right tool for that.
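To illustrate (a sketch only; the variable names are made up and this is not the literal checkout_upstream_base_branch_head builder):

BASE_BRANCH="$1"     # e.g. stable
UPSTREAM_HEAD="$2"   # base branch commit recorded by the upstream build_Tails_ISO job

# Point the remote-tracking ref at the commit the upstream build used, so that
# auto/config's git merge --no-edit "origin/${GIT_BASE_BRANCH}" merges the exact
# same base branch state even if the real origin moved in the meantime.
git update-ref "refs/remotes/origin/${BASE_BRANCH}" "$UPSTREAM_HEAD"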
> >> I’m not a big fan of the list of maybe_* builders in reproducibly_build_Tails_ISO.yaml; the flow control is unclear to me (e.g. will move_artifacts_to_dir_2 run if the 2nd ISO build fails?), it feels wrong to check the same boolean condition N times, and quite simply it’s very unclear what “maybe” means (one can’t know before reading the implementation, which raises a red flag with “badly named function” written on it). How about checking and aborting early if the boolean flag file is not present?
>
> > How do you do that in Jenkins?
>
> I don’t know. Sorry, I can’t help you design this given the time constraints you’re working under. The best I can do is reviewing.
I asked because I don’t know of any way to do that in Jenkins myself, hence my design choices.
> > A build step is supposed to run successfully or fail the build; it does not control the rest of the build step chain and flow.
>
> Then perhaps a chain of super-fine-grained build steps is not the right approach?
If you’re implying we have to make a wrapper to run these 3 “maybe” steps, then I’m not sure it makes sense. That wrapper would run the build, download the ISO and compare both ISOs. I think having tiny atomic steps for these different things is much clearer, including when one of these steps fails. In the case of a wrapper, we’d have to dig a bit deeper in the logs to find out why the job failed. Maybe we should rather find a better naming scheme?
#20 Updated by bertagaz 2017-10-20 15:37:15
intrigeri wrote:
> The File['/usr/local/bin/decide_if_reproduce'] resource should depend on the script’s dependencies (e.g. the Python Redmine library).
commit puppet-tails:657d1cb
> Do we still need if args.apikey_file is None now that that argument is required?
>
> Now that arguments are required and the main logic is about computing a boolean value, the code could express what it does in a much more straightforward way with something like (untested):
Right, commit puppet-tails:f58466e
> “the reproducible rebuild” feels wrong: we don’t know yet if it’ll be reproducible, the whole purpose is to check whether it is. Maybe “build reproducibility test” instead? In a few other places “reproduce” is used in ways that have the same semantics problem.
commit puppet-tails:a844c68
> > Linking to the artifacts of the last successful reproducibly_build_Tails_ISO_* job seems to result from a mechanical change without much thought put into the resulting developer experience (which might be OK-ish at PoC stage but let’s please adopt a user-centric PoV ASAP in the design & development process, because experience showed that it’s hard to patch this back in later on): when an ISO can be reproducibly built a 2nd time, who cares about the artifacts? As a developer I’m more interested in 1. the artifacts for the last reproduction failure (that link could be adjusted accordingly, I guess); 2. whether the last reproduction attempt was successful (IIRC I can see that in the project’s sidebar so I guess it doesn’t need another link).
>
> I’m talking about the links you’ve modified yesterday (in the non-atomic commit 11497165e109185df9a7e915f4028967b44643f8 in jenkins-jobs.git) as part of the branch you asked me to look at.
Ah, OK: commit jenkins-jobs:ae00672.
I’ve also simplified and improved the steps a bit more with commits jenkins-jobs:2df9768 and jenkins-jobs:11b0f1cf.
#21 Updated by intrigeri 2017-10-21 07:16:08
Hi!
> intrigeri wrote:
>> >> What’s the purpose of checkout_upstream_base_branch_head?
>>
>> I understand we need to use the base branch in the state it was in during the initial build. So far, so good. But I don’t understand how the checkout part will help: in auto/config we do git merge --no-edit "origin/${GIT_BASE_BRANCH}", so I don’t see what checking out a local branch has to do with it. Perhaps I’m confused because I don’t know what origin points to in the context of this job. Anyway, I don’t care much and I’m inclined to trust you on this one so let’s not bother. Please just ensure it works as intended when the whole thing is tested :)
> In this step, the origin of this repo points to the immerda repo. In the clone of this repo, where the merge and build happen (so in Vagrant) during the build step, the origin is artificially set to the same immerda URI.
> So if the HEAD of the base branch on immerda has changed between the cloning in the build_Tails_ISO job and the one in the reproducibly_build_Tails_ISO job, we won’t merge the same base branch state.
> We need to fake that the origin (immerda) base branch is at the HEAD we want, so that in the Vagrant repo clone we’ll merge the same faked immerda origin base branch state.
Thanks a lot for the explanation, it makes sense to me.
> From my tests, git-update-ref is the right tool for that.
OK. The branch was not using git update-ref last time you asked me to look at it. Now I see it does, and indeed it makes much more sense to me this way.
>> >> I’m not a big fan of the list of maybe_* builders in reproducibly_build_Tails_ISO.yaml; the flow control is unclear to me (e.g. will move_artifacts_to_dir_2 run if the 2nd ISO build fails?), it feels wrong to check the same boolean condition N times, and quite simply it’s very unclear what “maybe” means (one can’t know before reading the implementation, which raises a red flag with “badly named function” written on it). How about checking and aborting early if the boolean flag file is not present?
>>
>> > How do you do that in Jenkins?
>>
>> I don’t know. Sorry, I can’t help you design this given the time constraints you’re working under. The best I can do is reviewing.
> I asked because I don’t know of any way to do that in Jenkins myself, hence my design choices.
OK. Apparently you did have an implementation (with conditional-step) that expressed more clearly what the deal is: before commit 11497165e109185df9a7e915f4028967b44643f8 (jenkins-jobs) that introduced the maybe_ confusion, things looked just fine. So from my non-expert PoV, it does seem to be possible to express cleanly what you mean in Jenkins.
>> > A build step is supposed to run successfully or fail the build; it does not control the rest of the build step chain and flow.
>>
>> Then perhaps a chain of super-fine-grained build steps is not the right approach?
> If you’re implying we have to make a wrapper to run these 3 “maybe” steps, then I’m not sure it makes sense. That wrapper would run the build, download the ISO and compare both ISOs. I think having tiny atomic steps for these different things is much clearer, including when one of these steps fails. In the case of a wrapper, we’d have to dig a bit deeper in the logs to find out why the job failed. Maybe we should rather find a better naming scheme?
I don’t understand, but it’s not important (I don’t want to dive into this topic too deeply myself). Don’t bother explaining the detailed reasons to me; I’ll focus on your actual code and will refrain from wasting your time with random, poorly thought-out suggestions :)
I’ll look at this again tomorrow morning if this ticket has been assigned to me in the meantime (if you do that, please make it clear if/how the whole thing has been tested, and bonus points if you give me hints wrt. testing it myself on my local Jenkins setup). Then as you know, starting on Monday I’ll be away from Tails for a while.
#22 Updated by bertagaz 2017-10-21 09:35:29
- Assignee changed from bertagaz to intrigeri
- % Done changed from 30 to 50
- QA Check changed from Dev Needed to Ready for QA
intrigeri wrote:
> OK. The branch was not using git update-ref last time you asked me to look at it. Now I see it does, and indeed it makes much more sense to me this way.
Yeah, I changed that part to something effective meanwhile.
> OK. Apparently you did have an implementation (with conditional-step) that expressed more clearly what the deal is: before commit 11497165e109185df9a7e915f4028967b44643f8 (jenkins-jobs) that introduced the maybe_ confusion, things looked just fine. So from my non-expert PoV, it does seem to be possible to express cleanly what you mean in Jenkins.
But as explained in Feature #12633#note-10, it’s unlikely that we can install this conditional-step plugin in the current state of our Jenkins infra, because of other plugin dependencies. That’s why I worked around it like that.
I’ve given renaming the steps a try for a bit more clarity in commit jenkins-jobs:25b8d7c.
> I’ll look at this again tomorrow morning if this ticket has been assigned to me in the meantime (if you do that, please make it clear if/how the whole thing has been tested, and bonus points if you give me hints wrt. testing it myself on my local Jenkins setup). Then as you know, starting on Monday I’ll be away from Tails for a while.
Ok, then I think we’re good for another review. I did test the output of the decide_if_reproduce script, but did not test the whole thing in Jenkins. But it’s quite simple and straightforward, and I don’t expect many problems. I was planning to deploy it and fix the small issues that may arise. Actually, I think the step that may need refinement is the download_upstream_job_ISO_if_has_tried_reproduce one.
If you wish to test that in your infra, you need to have the related branch merged in your puppet-tails repo so that your isobuilders host the decide_if_reproduce script, and your master has the new version of the job generation script (which adds the reproducibly_build_Tails_ISO_ job to each Tails_ISO project).
You also need the changes in the jenkins-jobs repo, that is: defining the new reproducibly_build_Tails_ISO job with a job template, and adding it as a triggered job of the build_Tails_ISO one. Then, once the job generation script has run, you should have new reproducibly_build_Tails_ISO jobs for each active branch, and when triggered by their upstream build_Tails_ISO job they should fire a rebuild of the ISO if it’s a base branch or its ticket is RfQA.
#23 Updated by intrigeri 2017-10-22 07:11:25
- Assignee changed from intrigeri to bertagaz
- QA Check changed from Ready for QA to Dev Needed
> intrigeri wrote:
>> OK. The branch was not using git update-ref last time you asked me to look at it. Now I see it does, and indeed it makes much more sense to me this way.
> Yeah, I changed that part to something effective meanwhile.
I see. Meta: I’m not happy that I’ve spent time trying to understand why you did something in a way that seemed like it could not possibly work, only to realize now that indeed it could not work. If I ever commit to review your infra dev work again, a pre-condition will be that I only review stuff that has been successfully tested already.
>> OK. Apparently you did have an implementation (with conditional-step) that expressed more clearly what the deal is: before commit 11497165e109185df9a7e915f4028967b44643f8 (jenkins-jobs) that introduced the maybe_ confusion, things looked just fine. So from my non-expert PoV, it does seem to be possible to express cleanly what you mean in Jenkins.
> But as explained in Feature #12633#note-10, it’s unlikely that we can install this conditional-step plugin in the current state of our Jenkins infra, because of other plugin dependencies. That’s why I worked around it like that.
Your conclusion on that note was “It may also be that upgrading the token macro plugin won’t break anything, but a lot of other plugins depend on it. So that’s hard to say”. So, to me the obvious next step would be to try it out and see if this nicer solution works. But I understand you might not have the dev setup required to test such stuff, and if that’s the case, I guess you can’t afford investing time into setting this up a few days before your deadline for submitting this work.
So:
- let’s go for the workaround for the time being, apparently we don’t have any choice at this point given the timeline you’ve picked;
- if I’m guessing right, you can’t investigate (and implement) this much nicer solution now, because: 1. lack of a proper dev environment (as I’ve made clear a few times already, IMO it’s part of your job to get one); 2. lack of progress / realistic ETA on Bug #10068 (the fact we can’t install/upgrade to recent Jenkins plugins is becoming more and more of a blocker apparently, so please reply to my request for an ETA there). Given that the two main explanations for needing a rather ugly workaround are problems you are responsible for, and that could/should have been addressed a while ago, I’m expecting a follow-up ticket about replacing the workaround with the nicer implementation once either of these blockers is gone. If that doesn’t sound fair to you, I’m open to hearing about it & discussing this further :)
> I’ve given renaming the steps a try for a bit more clarity in commit jenkins-jobs:25b8d7c.
This is better, thanks for putting effort into it! if_tried_to_reproduce would seem clearer to me than if_has_tried_reproduce though: if you want to drop a word, better drop “has” than “to”; the result sounds more English and feels easier to understand to me.
>> I’ll look at this again tomorrow morning if this ticket has been assigned to me in the meantime (if you do that, please make it clear if/how the whole thing has been tested, and bonus points if you give me hints wrt. testing it myself on my local Jenkins setup). Then as you know, starting on Monday I’ll be away from Tails for a while.
> Ok, then I think we’re good for another review.
Looks OK except:
- the above comment wrt. step naming;
- I don’t understand how the wget command in download_upstream_job_ISO_if_has_tried_reproduce can possibly work. Did you mean to use --output-document=?
- If I got it right we’ll archive only the 2nd ISO to the artifacts directory. I saw no explanation why in the commit log, so I guess it’s about trading developer experience for storage space. If I guessed correctly, then I have two questions:
- Do we really need to make this trade-off? I thought we had taken into account these 2 ISOs when planning our storage.
- Assuming the answer to the previous question is “yes”: what’s the resulting developer story when investigating a failure to build reproducibly? IOW through which steps will I have to go to find & download the 1st ISO?
> I did test the output of the decide_if_reproduce script, but did not test the whole thing in Jenkins. But it’s quite simple and straightforward, and I don’t expect many problems. I was planning to deploy it and fix the small issues that may arise. Actually, I think the step that may need refinement is the download_upstream_job_ISO_if_has_tried_reproduce one.
As said above, this project is the last one during which I’ll wear the reviewer hat with this workflow, which makes me more relaxed wrt. accepting it yet another time ⇒ whatever, go ahead with this plan. Please ensure (before deploying) you’re in a good position to notice issues & receive feedback: given you’re cut out of most relevant email currently, you’ll need to 1. be proactive wrt. checking issues; 2. let the affected people know how they can send you feedback (Redmine I guess, since the usual “report to tails-ci@” feedback channel is broken for you currently; this is not a theoretical issue, we’ve received such feedback earlier this week that you’ve missed).
> If you wish to test that in your infra,
I’m willing to test stuff that you have already successfully tested, but I’m not ready to be the first person who tests your work. Sorry I should have made this clear earlier.
#24 Updated by bertagaz 2017-10-22 11:47:33
- related to Bug #14875: Build reproducibility Jenkins tests: confusing UX and implementation added
#25 Updated by bertagaz 2017-10-22 12:16:48
- % Done changed from 50 to 60
intrigeri wrote:
> […] Given that the two main explanations for needing a rather ugly workaround are problems you are responsible for, and that could/should have been addressed a while ago, I’m expecting a follow-up ticket about replacing the workaround with the nicer implementation once either of these blockers is gone. If that doesn’t sound fair to you, I’m open to hearing about it & discussing this further :)
I’m not, ticket created.
> This is better, thanks for putting effort into it! if_tried_to_reproduce would seem clearer to me than if_has_tried_reproduce though: if you want to drop a word, better drop “has” than “to”; the result sounds more English and feels easier to understand to me.
Done.
> * I don’t understand how the wget command in download_upstream_job_ISO_if_has_tried_reproduce can possibly work. Did you mean to use --output-document=?
Yes, forgot it.
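For the record, the corrected invocation would look something like this (the URL layout and variable names are placeholders, not the exact job configuration):

wget --no-verbose \
     --output-document="${WORKSPACE}/upstream-job.iso" \
     "${UPSTREAM_BUILD_URL}/artifact/build-artifacts/${ISO_NAME}"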
> * If I got it right we’ll archive only the 2nd ISO to the artifacts directory. I saw no explanation why in the commit log, so I guess it’s about trading developer experience for storage space. If I guessed correctly, then I have two questions:
> Do we really need to make this trade-off? I thought we had taken into account these 2 ISOs when planning our storage.
> Assuming the answer to the previous question is “yes”: what’s the resulting developer story when investigating a failure to build reproducibly? IOW through which steps will I have to go to find & download the 1st ISO?
As a developer with access to the Jenkins web interface, you’ll see the job chain, and can then see which build_Tails_ISO_build job triggered the reproducibly_build_Tails_ISO job, so you can easily get the ISO this way. Without access to Jenkins, you can find this information by looking at the tails-build-env.list file, which contains the upstream job’s build number. I get it’s not easy in this case, so I reverted this artifact archiving step to archive both ISOs. Now, we planned this duplicate storage for our base branches only, not for when we also try to reproduce branches whose ticket is RfQA.
> As said above, this project is the last one during which I’ll wear the reviewer hat with this workflow, which makes me more relaxed wrt. accepting it yet another time ⇒ whatever, go ahead with this plan. Please ensure (before deploying) you’re in a good position to notice issues & receive feedback: given you’re cut out of most relevant email currently, you’ll need to 1. be proactive wrt. checking issues; 2. let the affected people know how they can send you feedback (Redmine I guess, since the usual “report to tails-ci@” feedback channel is broken for you currently; this is not a theoretical issue, we’ve received such feedback earlier this week that you’ve missed).
I should be able to see by myself whether the deployment introduces bugs, as has been the case when I deployed stuff before.
#27 Updated by intrigeri 2017-10-29 08:26:10
The contract ends in 2 days. Do you still plan to deploy (and fix as needed) stuff by the end of the month? If not, please do communicate about your plans.
#28 Updated by bertagaz 2017-10-30 10:03:45
intrigeri wrote:
> The contract ends in 2 days. Do you still plan to deploy (and fix as needed) stuff by the end of the month? If not, please do communicate about your plans.
Yes.
#29 Updated by intrigeri 2017-10-30 16:58:39
>> The contract ends in 2 days. Do you still plan to deploy (and fix as needed) stuff by the end of the month? If not, please do communicate about your plans.
> Yes.
Amazing! :)
#30 Updated by bertagaz 2017-10-30 18:13:16
- Assignee changed from bertagaz to intrigeri
- % Done changed from 60 to 80
- QA Check changed from Dev Needed to Ready for QA
bertagaz wrote:
> > As said above, this project is the last one during which I’ll wear the reviewer hat with this workflow, which makes me more relaxed wrt. accepting it yet another time ⇒ whatever, go ahead with this plan. Please ensure (before deploying) you’re in a good position to notice issues & receive feedback: given you’re cut out of most relevant email currently, you’ll need to 1. be proactive wrt. checking issues; 2. let the affected people know how they can send you feedback (Redmine I guess, since the usual “report to tails-ci@” feedback channel is broken for you currently; this is not a theoretical issue, we’ve received such feedback earlier this week that you’ve missed).
>
> I should be able to see by myself whether the deployment introduces bugs, as has been the case when I deployed stuff before.
It’s been deployed, and the few remaining bugs and adjustments have been fixed in commits referencing this ticket, in the puppet-tails and jenkins-jobs repos.
Working scenarios can be seen in today’s builds and reproducible builds of the stable and Bug #10494 branches, the latter having been tested depending on its ticket’s QA Check status. The same can be done to test how the jobs behave, on this branch (I don’t mind a few emails), or on the reviewer’s own branch.
I’ll write an email to tails-dev ASAP.
#31 Updated by intrigeri 2017-11-06 20:04:32
- related to Bug #14924: reproducibly_build_Tails_ISO_stable Jenkins job always fails added
#32 Updated by intrigeri 2017-11-06 20:13:03
bertagaz wrote:
> It’s been deployed, and the few remaining bugs and adjustments have been fixed in commits referencing this ticket, in the puppet-tails and jenkins-jobs repos.
> Working scenarios can be seen in today’s builds and reproducible builds of the stable and Bug #10494 branches, the latter having been tested depending on its ticket’s QA Check status. The same can be done to test how the jobs behave, on this branch (I don’t mind a few emails), or on the reviewer’s own branch.
I had a first quick look and the first thing I’ve noticed is Bug #14924: it’s definitely not your job to fix it, but please pay some more attention to what your work gives us (or fails to) in practice, and when you notice it actually gives us no useful info at all for one of our most important branches, please report it so whoever else feels responsible for it can fix the root cause. Thanks in advance!
I’ll fix Bug #14924 before I look at this closer; I might have to postpone this review to the 3.4 cycle given I had to take over quite some work from one of our colleagues; anyway, if I’m not mistaken there’s no hurry anymore as the deadline has passed and this was deployed in production pre-review already.
> I’ll write an email to tails-dev ASAP.
Yes, please let your users know about this change. You know the drill, not teaching them to ignore the very notifications we want them to read and all that.
#33 Updated by intrigeri 2017-11-08 11:08:08
First issues I’ve noticed with my “user of this system in real life” hat (for now I don’t care about the implementation details, I only care about my UX):
- I see the tails-reproduce file is archived as an artifact. This is confusing and distracting because I wonder “is there any info I should care about in there?”. Granted, it’s a minor issue in isolation, but these things add up.
- The reproducibly_* jobs are marked as succeeded when no actual rebuild was done. So if I’m reviewing a branch, in order to know whether it breaks reproducibility, I have to look at two things: the corresponding reproducibly_* job status and the duration of the last run. If I only look at the status, then I might merge a branch whose reproducibility has not been checked yet, e.g. because I am reviewing it before our CI system noticed it’s now Ready for QA and flagged it for reproducibility testing accordingly (which would then be checked in the next 24h if I’m not mistaken). This is not consistent with how we display test suite run results and does not match my habits. I don’t think it’s OK to expect a reviewer to check the results of these two kinds of test jobs in two different ways. What I would like to see instead is: ideally, some kind of “not built” or “skipped” status (like the grey icon when a job has never run yet); worst case, if you can’t implement proper semantics, I think I prefer to see a reproducibility job that didn’t actually rebuild shown as failed rather than as successful. But wait, why do we generate these reproducibly_build* jobs at all when we already know they’ll be no-ops and just add noise to the UI? It looks like we’re in some grey area between two designs that would make sense to me: either only create jobs that will actually do something useful, or generate them all and check at run time whether a rebuild must be done (== if Ready for QA, or base branch, or something). I’ll let you figure out how to fix this UX problem. It’s OK if the short-term fix is not super nice, as long as there’s some kind of plan to make it nicer later on (I’d rather see this fixed ASAP than block on whatever big thing is needed to make the whole thing more consistent).
#34 Updated by intrigeri 2017-11-08 11:41:10
Also, I suggest renaming the jobs: the reproducibly_build_ prefix does not correctly convey what they’re doing: we don’t know if they’re going to build “reproducibly”, that’s precisely what they’re trying to test. What about test_reproducibility_ instead?
#35 Updated by intrigeri 2017-11-08 16:59:41
bertagaz wrote:
> I get it’s not easy in this case, so I reverted this artifact archiving step to archive both ISOs.
Thanks, I’ve taken a look and indeed that’s much better :)
> Now, we planned this duplicate storage for our base branches only, not for when we also try to reproduce branches whose ticket is RfQA.
What do you plan to do about it? (I see no corresponding ticket related to this one.)
#36 Updated by intrigeri 2017-11-08 17:32:56
- Assignee changed from intrigeri to bertagaz
- QA Check changed from Ready for QA to Dev Needed
I think there’s a bug in the handling of the base branch merge or something related: https://jenkins.tails.boum.org/view/Tails_ISO/job/reproducibly_build_Tails_ISO_bugfix-14796-topicons-position/8/artifact/build-artifacts/diffoscope.html shows differing (merge) commit IDs, while the build logs pretend we’re merging the exact same commit (“at commit …”). Neither of these two commit IDs is in the official tails.git, hence my guess that a different merge commit ID is created for some reason unknown to me. I don’t know what’s going on here but it seems fishy. Please investigate and, if this is Someone Else’s Problem™, file a ticket for them; otherwise fix it. I suspect it would help debugging if we passed --verbose to this git merge, and/or displayed the ID of the resulting merge commit, or similar.
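Something along these lines in the merge step would probably be enough (a sketch, not the actual auto/config code):

# More verbose merge, and print the resulting merge commit so two runs can be
# compared in the build logs.
git merge --verbose --no-edit "origin/${GIT_BASE_BRANCH}"
echo "Merged origin/${GIT_BASE_BRANCH}, resulting commit: $(git rev-parse HEAD)"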
I’m concerned about the up to 24h delay between when I flag a ticket as Ready for QA and when the branch is actually going to be tested for reproducibility: as a reviewer, when a branch is flagged as ready for QA, I already have access to the corresponding test suite results, so this feels inconsistent and adds a long delay during which Redmine pretends I should review something, but actually I can’t. I don’t know how to address this with the current design, which suggests the current design does not match our needs, but I’ll be happy to be proven wrong. If things really have to remain as-is for some reason (I doubt it but who knows), then I think at the very least our reviewers doc should mention this problem.
I had a quick look at the code and it looks OK modulo all the issues that are tracked by other tickets. One single comment: “for the test and reproducible child job” seems unclear, basically it has the same problem as the job names as reported earlier.
#37 Updated by intrigeri 2017-11-10 10:23:32
- related to Bug #14944: jenkins-data-disk is running out of diskspace added
#38 Updated by intrigeri 2017-11-10 15:36:36
intrigeri wrote:
> I think there’s a bug in the handling of the base branch merge or something related: https://jenkins.tails.boum.org/view/Tails_ISO/job/reproducibly_build_Tails_ISO_bugfix-14796-topicons-position/8/artifact/build-artifacts/diffoscope.html shows differing (merge) commit IDs, while the build logs pretend we’re merging the exact same commit (“at commit …”). Neither of these two commit IDs is in the official tails.git, hence my guess that a different merge commit ID is created for some reason unknown to me. I don’t know what’s going on here but it seems fishy. Please investigate and, if this is Someone Else’s Problem™, file a ticket for them; otherwise fix it. I suspect it would help debugging if we passed --verbose to this git merge, and/or displayed the ID of the resulting merge commit, or similar.
I was annoyed by this bug while working on Bug #14933 so I’ve investigated a bit and filed Bug #14946. I’ll take care of it.
#39 Updated by intrigeri 2017-11-10 15:36:50
- related to Bug #14946: Topic branches that lag behind their base branch don't build reproducibly with mergebasebranch build option added
#40 Updated by anonym 2017-11-15 11:30:47
- Target version changed from Tails_3.3 to Tails_3.5
#41 Updated by intrigeri 2017-11-18 19:56:13
- Deliverable for changed from 289 to 301
(The problem that had IMO to be solved before the deadline was solved. Let’s handle the leftovers as every other “internal” one.)
#42 Updated by intrigeri 2017-12-17 15:02:27
When I receive email notifications like “Build failed in Jenkins: reproducibly_build_Tails_ISO_bugfix-15029-apparmor-cups-backends”, I would like to be able to quickly dismiss the obvious cause of non-reproducibility (different APT snapshots used for the 2nd build). Currently I don’t do it because it simply happens too often, there’s no direct link to the diffoscope output, and I can’t be bothered switching to my web browser to go to the Jenkins web UI. But I’m pretty sure I would bother investigating if the email notification directly included the content of diffoscope.txt. So I propose that after running diffoscope, if diffoscope.txt is small enough (say, not more than 64KB), we cat it so its content ends up in the email notification. What do you think? If you agree I’ll implement it.
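A minimal sketch of that idea (the artifacts path and the exact size check are assumptions):

DIFFOSCOPE_TXT="${WORKSPACE}/build-artifacts/diffoscope.txt"
MAX_BYTES=65536  # 64 KB

# Dump small diffoscope reports to the console, so they end up in the
# notification email Jenkins sends when the job fails.
if [ -f "$DIFFOSCOPE_TXT" ] && \
   [ "$(stat --format=%s "$DIFFOSCOPE_TXT")" -le "$MAX_BYTES" ]; then
    cat "$DIFFOSCOPE_TXT"
fi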
#43 Updated by intrigeri 2018-01-03 13:18:05
intrigeri wrote:
> So I propose that after running diffoscope, if diffoscope.txt is small enough (say, not more than 64KB), we cat it so its content ends up in the email notification.
Pushed something, let’s see how it goes.
#44 Updated by intrigeri 2018-01-04 05:42:19
intrigeri wrote:
> intrigeri wrote:
> > So I propose that after running diffoscope, if diffoscope.txt is small enough (say, not more than 64KB), we cat it so its content ends up in the email notification.
>
> Pushed something, let’s see how it goes.
Seems to work: https://jenkins.tails.boum.org/view/Tails_ISO/job/reproducibly_build_Tails_ISO_web-14997-explain-better-verification-and-failure/20/console. I’ve not seen the corresponding email notification, but given the ones I see include the end of the console output up to “Archiving artifacts”, we should be good :)
#45 Updated by anonym 2018-01-23 19:52:27
- Target version changed from Tails_3.5 to Tails_3.6
#46 Updated by bertagaz 2018-03-14 11:32:05
- Target version changed from Tails_3.6 to Tails_3.7
#47 Updated by intrigeri 2018-04-08 17:10:35
- blocked by deleted (Feature #13436: Have Jenkins jobs that reproduce ISOs when a branch ticket is Ready for QA)
#48 Updated by intrigeri 2018-04-08 17:19:28
- related to deleted (Feature #12654: Introduce more variations in our reproducibility CI tests)
#49 Updated by intrigeri 2018-04-08 18:09:31
- related to Bug #14871: Deduplicate the ticket_qa_check Redmine query function into the Tails pythonlib added
#50 Updated by intrigeri 2018-04-08 18:09:36
- related to Bug #14875: Build reproducibility Jenkins tests: confusing UX and implementation added
#51 Updated by intrigeri 2018-04-08 18:10:52
- Status changed from In Progress to Resolved
- Priority changed from Normal to Elevated
- % Done changed from 0 to 100
Let’s call this done and track the problems identified by the post-deployment review on dedicated tickets: Bug #14871 and Bug #14875.