Bug #12629

Document reproducible release process

Added by Anonymous 2017-06-02 12:43:59. Updated 2019-05-06 18:15:37.

Status:
Resolved
Priority:
High
Assignee:
Category:
Target version:
Start date:
2017-06-02
Due date:
% Done:

100%

Feature Branch:
Type of work:
Contributors documentation
Blueprint:

Starter:
Affected tool:
Deliverable for:
301

Description

We need to update the release process documentation to take care of reproducible ISOs and IUKs.


Related issues

Related to Tails - Feature #16052: Document post-release reproducibility verification for IUKs Confirmed 2018-10-15
Has duplicate Tails - Feature #12628: Draft a "user" (aka. RM) story for the reproducible release process Duplicate 2017-06-02

History

#1 Updated by Anonymous 2017-06-02 12:44:11

  • related to Feature #12628: Draft a "user" (aka. RM) story for the reproducible release process added

#2 Updated by intrigeri 2017-06-02 12:49:56

  • Status changed from New to Confirmed

#3 Updated by intrigeri 2017-06-10 15:22:18

  • Target version set to Tails_3.2

I see two main aspects here, that I’ll discuss first for ISOs and then for IUKs.

For the ISO image:

  • ensure at least N entities produced the same ISO: developers’ laptops? CI infra? Where do we set the bar?
  • avoid having to upload the ISO at release time, and while we’re at it, fix the “Upload images” section of the release process doc (AFAIK no RM has actually followed it as-is for years); so, instead of pretending we seed the ISO and then copy it from bittorrent.lizard to rsync.lizard, we should probably instead (see the sketch after this list):
    • scp the detached signature to rsync.lizard
    • ssh rsync.lizard and wget the ISO built by Jenkins
    • verify the detached signature
    • scp the Torrent to bittorrent.lizard
    • ssh bittorrent.lizard, wget the ISO built by Jenkins, add to Transmission
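
A minimal sketch of the above (the hostnames are ours, but the Jenkins URL, file names, and remote paths are hypothetical placeholders; each block is meant to run in a shell on the machine named in its comment):

    # On the RM's machine: upload only the detached signature and the Torrent
    scp "tails-amd64-$VERSION.iso.sig" rsync.lizard:
    scp "tails-amd64-$VERSION.torrent" bittorrent.lizard:

    # On rsync.lizard: fetch the ISO that Jenkins built, then verify it
    wget "$JENKINS_URL/tails-amd64-$VERSION.iso"    # hypothetical URL
    gpg --verify "tails-amd64-$VERSION.iso.sig" "tails-amd64-$VERSION.iso"

    # On bittorrent.lizard: fetch the same ISO and start seeding it
    wget "$JENKINS_URL/tails-amd64-$VERSION.iso"
    transmission-remote --add "tails-amd64-$VERSION.torrent"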

For IUKs:

  • “ensure at least N entities produced the same” still applies, modulo the fact that we don’t build them on our CI, so only developers can reproduce them;
  • regarding publication, it’s a bit more subtle since we don’t build them on our CI so one needs to upload them (which apparently is not documented yet BTW).

Out of personal interest I might give “avoid having to upload the ISO” a try during the 3.0 release process, in which case I’ll probably draft the needed changes in a branch; then anonym can test & polish them.

#4 Updated by intrigeri 2017-06-10 16:59:16

  • Status changed from Confirmed to In Progress
  • % Done changed from 0 to 10
  • Feature Branch set to doc/12629-reproducible-release-process

Draft written and tested for the ISO publication. I’ll push once I’ve tested the bits about the IUKs and Torrent.

#5 Updated by intrigeri 2017-06-10 17:37:33

Pushed! My branch addresses the “ensure at least N entities produced the same” part, but the other part is left as an exercise to the reader^W^Wanonym :)

#6 Updated by intrigeri 2017-06-22 13:50:58

  • related to deleted (Feature #12628: Draft a "user" (aka. RM) story for the reproducible release process)

#7 Updated by intrigeri 2017-06-22 13:51:12

  • has duplicate Feature #12628: Draft a "user" (aka. RM) story for the reproducible release process added

#8 Updated by intrigeri 2017-07-06 07:05:47

  • Feature Branch deleted (doc/12629-reproducible-release-process)

The updated doc on the branch has worked fine during the 3.0.1 release process, so I’m merging it. This is not everything this ticket is about though.

#9 Updated by intrigeri 2017-09-07 08:46:41

  • Priority changed from Normal to Elevated

It would be nice if this was drafted in time to be tested while releasing 3.2~rc1, so we can polish it as needed and test a final version during the 3.2 release process.

#10 Updated by intrigeri 2017-09-07 12:40:18

anonym plans to do this post-3.2-freeze.

#11 Updated by anonym 2017-09-26 22:29:17

  • Assignee changed from anonym to intrigeri
  • % Done changed from 10 to 50
  • QA Check set to Ready for QA

I’m a bit confused. I followed the updated release process for 3.0.1, 3.2~rc1, and 3.2, and it (beautifully!) does the ISO/IUK publication parts, but contrary to Bug #12629#note-5 it does not address the “ensure at least N entities produced the same ISO/IUK” part (well, it verifies that Jenkins built the same ISO, but I don’t think that counts! :)). In fact, AFAICT, that is the only part that is left for this ticket, except for some sort of contingency plan in case reproduction fails. If I have missed anything else, please let me know!

For the first issue, I am going to be conservative and suggest that we simply say that N=2 (note: for ISOs we also implicitly require that Jenkins reproduces the ISO due to the publishing instructions, but this is orthogonal to our requirements on N, imho). As for the requirements of these N persons, we could require them to be members of tails-rm, but that is a bit limiting (there are only three of us currently). Same with tails@, unless we make more of its members able to build Tails. If we go beyond that we don’t really have a security policy to rely on. Hm. Well, for now I don’t think I dare propose requiring anything less than tails@ membership — I’ll lobby for more of them having Tails build setups ready for Tails 3.3! :) Note that in practice that means the “participants” are the RM + one more tails@ member (since all possible RMs themselves are members of tails@).

For the second issue, the case where reproduction fails, I say, essentially: make a quick investigation. If the issue can be fixed cheaply, then rebuild. Otherwise, let’s not delay the release, just make sure there’s no backdoor, random bit flip, or something else serious. To me this seems to strike the right balance between taking reproducibility seriously (and benefiting from all its nice security implications) and the cost to our users of delaying releases.

I went ahead and pushed my proposal straight to testing (so it will end up in master soon) since I thought it wouldn’t make things worse, at least. :) See commit:412b64f418a0261b199fce266b4b3fddcced65e2. Note the four levels of bullet points, oh yeah!!!! >:)

What do you think? I’m also asking for your opinions, Ulrike and bertagaz!

[BTW, Ulrike, my secret plan is to make you set up a build environment for Tails 3.3 so you can help reproduce it! :)]

#12 Updated by intrigeri 2017-09-27 07:32:21

  • Assignee changed from intrigeri to anonym
  • QA Check changed from Ready for QA to Dev Needed

Hi!

> I’m a bit confused. I followed the updated release process for 3.0.1, 3.2~rc1, and 3.2, and it (beautifully!) does the ISO/IUK publication parts, but contrary to Bug #12629#note-5 it does not address the “ensure at least N entities produced the same ISO/IUK”

Right, copy’n’paste error, sorry. I think I meant to paste “avoid having to upload the ISO at release time” instead, but that was 4 months ago, so well…

> For the first issue, I am going to be conservative and suggest that we simply say that N=2 (note: for ISOs we also implicitly require that Jenkins reproduces the ISO due to the publishing instructions, but this is orthogonal to our requirements on N, imho). As for the requirements of these N persons, we could require them to be members of tails-rm, but that is a bit limiting (there are only three of us currently). Same with tails@, unless we make more of its members able to build Tails. If we go beyond that we don’t really have a security policy to rely on. Hm. Well, for now I don’t think I dare propose requiring anything less than tails@ membership — I’ll lobby for more of them having Tails build setups ready for Tails 3.3! :) Note that in practice that means the “participants” are the RM + one more tails@ member (since all possible RMs themselves are members of tails@).

OK, let’s try this and we can adjust later if needed.

> For the second issue, the case where reproduction fails, I say, essentially: make a quick investigation. If the issue can be fixed cheaply, then rebuild. Otherwise, let’s not delay the release, just make sure there’s no backdoor, random bit flip, or something else serious. To me this seems to strike the right balance between taking reproducibility seriously (and benefiting from all its nice security implications) and the cost to our users of delaying releases.

I don’t know about the “right balance” as it greatly depends on what we’re trying to achieve, which is unclear to me: I have no idea what high-level goal you’re trying to achieve with this doc and the doc itself seems both confused and confusing to me in this respect. Here you write “make sure there’s no backdoor” and in commit:412b64f418a0261b199fce266b4b3fddcced65e2 you write “try to rule out that the RM has gone rogue by including a backdoor”. But the way I understand this updated doc, both comparing hashes, “immediately compare the ISOs” and deciding whether the non-determinism matters are the RM’s job. Besides, nothing seems to prevent the RM from actually releasing a different ISO than the one that other people built identically (or manually tested, by the way).

So let’s please start by clarifying the goals; at first glance, either the bar must be set quite a bit lower (one should check whether it still matches the set of goals and expected benefits we told the sponsor about though), or we need a different verification and decision-making process.

#13 Updated by anonym 2017-09-27 10:13:45

  • Assignee changed from anonym to intrigeri
  • QA Check changed from Dev Needed to Ready for QA

intrigeri wrote:
> > I’m a bit confused. I followed the updated release process for 3.0.1, 3.2~rc1, and 3.2, and it (beautifully!) does the ISO/IUK publication parts, but contrary to Bug #12629#note-5 it does not address the “ensure at least N entities produced the same ISO/IUK”
>
> Right, copy’n’paste error, sorry. I think I meant to paste “avoid having to upload the ISO at release time” instead, but that was 4 months ago, so well…

As long as you think I didn’t miss anything you already had thought about, I am happy.

> > For the second issue, the case where reproduction fails, I say, essentially: make a quick investigation. If the issue can be fixed cheaply, then rebuild. Otherwise, let’s not delay the release, just make sure there’s no backdoor, random bit flip, or something else serious. To me this seems to strike the right balance between taking reproducibility seriously (and benefiting from all its nice security implications) and the cost to our users of delaying releases.
>
> I don’t know about the “right balance” as it greatly depends on what we’re trying to achieve, which is unclear to me: I have no idea what high-level goal you’re trying to achieve with this doc and the doc itself seems both confused and confusing to me in this respect.

Like I said, the high-level goal is to “[benefit] from [reproducibility’s] nice security implications”. Looking at the “why” section of our blueprint, I believe the only one worth mentioning is: “independent verification that a build product matches what the source intended to produce ⇒ better resist attacks against build machines and developers”. (I could also mention e.g. “No more bit flip[s]” since they theoretically could silently degrade security, but I think the other goal is enough.)

By requiring two tails@ members we have at least eliminated the single point of failure that could lead to Tails being backdoored through build system compromise. IMHO the only thing that would substantially improve this would be widespread involvement of third parties, but let’s take it easy with that for now. :) As for “the cost to our users of delaying releases”, I think it’s pretty clear that from their PoV it is more important to get security updates than to wait for a fix for a proven non-malicious reproducibility problem.

> Here you write “make sure there’s no backdoor” and in commit:412b64f418a0261b199fce266b4b3fddcced65e2 you write “try to rule out that the RM has gone rogue by including a backdoor”.

Your quote excludes the “:)” — this was a poor attempt at self-deprecating humor, which I then started to take semi-seriously (after I wrote it, I actually went over my text and made a few changes that I thought would make the process more robust against a rogue RM without adding extra cost), and this only muddied the waters (sorry!).

While I do think that just having this process involve other trusted (and semi-paranoid! :)) people makes it harder for the RM to go rogue (or, more likely, backdoor the ISO under legal duress), the process certainly isn’t robust against a dishonest RM, and there are still other vectors that our current reproducibility model doesn’t take into account (e.g. the packages in our custom APT repo). Let’s forget about the “going rogue”/“legal duress” part, and refocus on “resist attacks against build machines”. That is commit:7479d7070ae871fa894ca5b3eed448948ee0591a.

> But the way I understand this updated doc, both comparing hashes, “immediately compare the ISOs” and deciding whether the non-determinism matters are the RM’s job.

My idea was actually that everyone involved compares hashes, and in the event of a mismatch stays involved in the process, awaiting a plausible explanation from the RM that they can verify (possibly by involving e.g. another tails-rm@ member, who should be up for the task). Anyway, let’s forget about this, and indeed make all this the RM’s job, which everyone just trusts.

> Besides, nothing seems to prevent the RM from actually releasing a different ISO than the one that other people built identically (or manually tested, by the way).

Agreed, the RM could replay someone else’s hash, so some care would have to be taken about the order in which these things are communicated (i.e. the RM must send theirs first). Anyway, let’s forget about this!

> So let’s please start by clarifying the goals; at first glance, either the bar must be set quite a bit lower (one should check whether it still matches the set of goals and expected benefits we told the sponsor about though), or we need a different verification and decision-making process.

Is the situation clearer/saner now?

#14 Updated by intrigeri 2017-09-28 10:04:26

  • Assignee changed from intrigeri to anonym
  • QA Check changed from Ready for QA to Info Needed

Hi!

anonym:
> intrigeri wrote:

> Like I said, the high-level goal is to “[benefit] from [reproducibility’s] nice security implications”. Looking at the “why” section of our blueprint, I believe the only one worth mentioning is: “independent verification that a build product matches what the source intended to produce ⇒ better resist attacks against build machines and developers”. (I could also mention e.g. “No more bit flip[s]” since they theoretically could silently degrade security, but I think the other goal is enough.)

I see.

> By requiring two tails@ members we have at least eliminated the single point of failure that could lead to Tails being backdoored through build system compromise.

Agreed.

> IMHO the only thing that would substantially improve this would be widespread involvement of third parties, but let’s take it easy with that for now. :)

Right. See below for the cheapest such improvement I have in mind.

> While I do think that just having this process involve other trusted (and semi-paranoid! :)) people makes it harder for the RM to go rogue (or, more likely, backdoor the ISO under legal duress), the process certainly isn’t robust against a dishonest RM, and there are still other vectors that our current reproducibility model doesn’t take into account (e.g. the packages in our custom APT repo). Let’s forget about the “going rogue”/“legal duress” part, and refocus on “resist attacks against build machines”. That is commit:7479d7070ae871fa894ca5b3eed448948ee0591a.

Fair enough.

Bonus nitpicking: this does not fully achieve “resist attacks against build machines”, as in theory a compromised RM’s machine can replace the reproduced + tested ISO/IUK with other ones. All it takes is to 1. steal the smartcard PIN code while the RM is busy typing that PIN numerous times for signing UDFs, and seize this opportunity to sign other data while the smartcard is plugged; 2. steal the RM’s SSH credentials; 3. upload replacement ISO + IUKs after we’ve reproduced+tested the genuine ones; 4. push replacement IDF and UDFs to Git using the stolen credentials (I doubt we would reliably notice that unless someone carefully verifies what happens in Git in and around the “merge new release to master” commit every time). Granted, that’s quite theoretical but I think we should take highly sophisticated adversaries into account in this context. And granted too, our custom APT repo gives such adversaries much easier (and harder to detect) attack vectors at the moment. I’m not arguing in favour of trying to fix this remaining problem, I just want us to be super clear about what we think we’re achieving here, both in our own minds and in our external communication :)

So to be extra clear, ignoring my nitpicking above, these two documented reasons “Why we want reproducible builds” are not achieved yet:

  • “the incentive for an attacker […] to compromise developers themselves, is lowered”
  • “In turn, this avoids the need to trust people (or software) who build the ISO we release, which in turn allows more people to get involved in release management work.”

Let’s keep this in mind when communicating the benefits to our users and when writing the design doc. It’s not obvious as it differs from our currently documented goals. So I’d like to see a note about this, pointing here, on the corresponding 2-3 tickets. But wait, see my proposal below.

>> But the way I understand this updated doc, both comparing hashes, “immediately compare the ISOs” and deciding whether the non-determinism matters are the RM’s job.

> My idea was actually that everyone involved compares hashes, and in the event of a mismatch stays involved in the process, awaiting a plausible explanation from the RM that they can verify (possibly by involving e.g. another tails-rm@ member, who should be up for the task). Anyway, let’s forget about this, and indeed make all this the RM’s job, which everyone just trusts.

OK, that makes it much clearer (as IMO the proposed doc does not correctly implement the idea you previously had in mind).

>> So let’s please start by clarifying the goals; at first glance, either the bar must be set quite a bit lower (one should check whether it still matches the set of goals and expected benefits we told the sponsor about though), or we need a different verification and decision-making process.

> Is the situation clearer/saner now?

Yes, it’s much clearer.

Now, I’d like to propose having a third party (e.g. another Foundations Team member, i.e. most often myself) check, shortly after the release goes live, that the published ISO image matches both the published tag and what manual testers have tested. The additional work this requires is (a sketch follows the list):

  • File a ticket about this for every release in advance (so we don’t rely on the RM to file it… or not, due to being under duress or tired/sloppy).
  • When sending the call for manual testing, the RM attaches the detached signature.
  • The reproducer rebuilds the ISO from the tag and verifies it matches:
    • the published detached signature + hash found in the IDF
    • the detached signature previously emailed by the RM for manual testing
  • The reproducer rebuilds the IUKs and checks that their hash matches:
    • the published UDFs
    • what the RM pushed to the test channel for manual testing
  • Document the above.
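
For illustration, the reproducer’s checks might boil down to something like this (a sketch only; the build command, file names, and the digest recorded in the IDF/UDFs are hypothetical placeholders):

    # Rebuild from the signed tag
    git checkout "$VERSION"
    ./build-tails-iso                          # hypothetical build command

    # ISO: compare the rebuild against the published detached signature,
    # and against the signature the RM emailed for manual testing
    gpg --verify "published-$VERSION.iso.sig" "tails-amd64-$VERSION.iso"
    gpg --verify "emailed-$VERSION.iso.sig" "tails-amd64-$VERSION.iso"
    sha256sum "tails-amd64-$VERSION.iso"       # compare with the hash in the IDF

    # IUKs: compare rebuilt hashes with the published UDFs
    # and with what was pushed to the test channel
    sha256sum Tails_amd64_*_to_"$VERSION".iuk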

Cost:

  • The additional recurring work for reproducing seems quite small, and would be mostly on my plate. Some of it can be automated as I get bored doing it manually every time.
  • The initial Redmine + documentation work seems pretty small too. I can do it the first time I play the post-release reproducer role.
  • The additional work for the RM is limited to “attach the detached signature to an email once during each release process”, which seems totally negligible compared to our RM’ing time budget.

Benefit: this addresses most of the concerns I’ve raised, and gives us part of the two goals you’re proposing we drop, i.e. we don’t have to trust the RM and their machine to actually publish the ISO + IUK that have been reproduced and manually tested. I am aware this does not fully protect against a corrupt RM person nor machine due to other, unrelated attack vectors (i.e. various trusted input nobody can easily verify), but at least 1. it makes the “trusted inputs → published artifacts” relationship verifiable, which feels huge to me (that’s what reproducible builds are primarily about :) and 2. it clarifies what benefits we would get from enabling independent verification of currently trusted inputs in the future, which un-muddies the water a lot (it’s not very useful to enable such independent verification if we still rely purely on the RM to triage verification results).

At first glance, the cost/benefit seems totally favorable to me. But I’ve dived too much into this right now to have a good perspective, so I think I’ll need to sleep on it, take a step back, and look at the big picture in a few days again :)

What do you think?

#15 Updated by anonym 2017-09-28 18:29:18

  • Target version changed from Tails_3.2 to Tails_3.3

#16 Updated by intrigeri 2017-10-02 12:38:25

  • blocks Feature #12356: Communicate about reproducible builds to users via a blog post added

#17 Updated by anonym 2017-10-17 15:51:55

  • Assignee changed from anonym to intrigeri
  • QA Check changed from Info Needed to Ready for QA

intrigeri wrote:
> Bonus nitpicking: this does not fully achieve “resist attacks against build machines”, as in theory a compromised RM’s machine can replace the reproduced + tested ISO/IUK with other ones. All it takes is to 1. steal the smartcard PIN code while the RM is busy typing that PIN numerous times for signing UDFs, and seize this opportunity to sign other data while the smartcard is plugged; 2. steal the RM’s SSH credentials; 3. upload replacement ISO + IUKs after we’ve reproduced+tested the genuine ones; 4. push replacement IDF and UDFs to Git using the stolen credentials (I doubt we would reliably notice that unless someone carefully verifies what happens in Git in and around the “merge new release to master” commit every time). Granted, that’s quite theoretical but I think we should take highly sophisticated adversaries into account in this context. And granted too, our custom APT repo gives such adversaries much easier (and harder to detect) attack vectors at the moment. I’m not arguing in favour of trying to fix this remaining problem, I just want us to be super clear about what we think we’re achieving here, both in our own minds and in our external communication :)
>
> So to be extra clear, ignoring my nitpicking above, these two documented reasons “Why we want reproducible builds” are not achieved yet:
>
> * “the incentive for an attacker […] to compromise developers themselves, is lowered”
> * “In turn, this avoids the need to trust people (or software) who build the ISO we release, which in turn allows more people to get involved in release management work.”

I more or less reworked the whole text (commit:0ad32beb9ee7422bfde0a513f1cc8af0341ea726), so I now think the two points above are achieved as far as hardware is concerned. I.e. I think the release process now resists an attacker compromising the RM’s hardware (modulo the attacker changing any of the trusted inputs, but the diff review should catch things modified in Tails’ Git, and the APT parts are out of scope for now).

> >> But the way I understand this updated doc, both comparing hashes, “immediately compare the ISOs” and deciding whether the non-determinism matters are the RM’s job.
>
> > My idea was actually that everyone involved compares hashes, and in the event of a mismatch stays involved in the process, awaiting a plausible explanation from the RM that they can verify (possibly by involving e.g. another tails-rm@ member, who should be up for the task). Anyway, let’s forget about this, and indeed make all this the RM’s job, which everyone just trusts.
>
> OK, that makes it much clearer (as IMO the proposed doc does not correctly implement the idea you previously had in mind).

With my rewrite, it is now very clear.

> Now, I’d like to propose having a third party (e.g. another Foundations Team member, i.e. most often myself) check, shortly after the release goes live, that the published ISO image matches both the published tag and what manual testers have tested. […]

I like it, but don’t see why it has to be done by another Foundations Team member, nor why it should be done post release. So I’ve adapted your idea so the tails@ member does it before the release. Did I screw it up?

If you like this approach, I’d like you to test it with me for Tails 3.3.

#18 Updated by intrigeri 2017-10-18 10:50:18

  • Assignee changed from intrigeri to anonym
  • QA Check changed from Ready for QA to Dev Needed

> I more or less reworked the whole text (commit:0ad32beb9ee7422bfde0a513f1cc8af0341ea726), so I now think the two points above are achieved as far as hardware is concerned. I.e. I think the release process now resists an attacker compromising the RM’s hardware (modulo the attacker changing any of the trusted inputs, but the diff review should catch things modified in Tails’ Git, and the APT parts are out of scope for now).

>> Now, I’d like to propose having a third party (e.g. another Foundations Team member, i.e. most often myself) check, shortly after the release goes live, that the published ISO image matches both the published tag and what manual testers have tested. […]

> I like it,

Cool :)

> but don’t see why it has to be done by another Foundations Team member,

Because we need someone who commits to do boring work regularly under tight time constraints. I think the only way to have that is to include it in Core work (it’s almost exactly our working definition of Core work actually), and the simplest way to do that in the short term is to piggy-back on some existing role instead of creating a new one; I happened to pick Foundations Team but feel free to pick another one that fits better if you want, or to propose creating a new dedicated Core work role. I don’t care much, as long as we have good enough means to rely on that commitment.

> nor why it should be done post release.

Well, it’s logically impossible to check “that the published ISO image matches both the published tag and what manual testers have tested” before it is released, isn’t it?

> If you like this approach, I’d like you to test it with me for Tails 3.3.

Nice, but as said above I’d rather not rely on non-formalized commitments for this in the long term. So either make it so the commitment is formalized somewhere, or fall back to the Foundations Team idea.

I’ve had a look at commit:0ad32beb9ee7422bfde0a513f1cc8af0341ea726 and (surprise!) I have a few comments:

  • A compromised RM’s system can still publish a different ISO than the one that has been successfully reproduced by the TR, no? It seems that even with the pre-release “Verify the meta data pointing to the uploaded ISO and IUKs” step, our only protection against this implicitly lies in the fact that some people will monitor every Git commit on the master branch all the time, which is unreliable (nobody really does that consistently, e.g. I often skip merge commits and you sometimes don’t revert spam when you push new stuff there, which suggests you just did git pull without checking the changes closely; and anyway, it’s on nobody’s job definition to do that currently). Hence the need to do the verification after the release, unless I missed something.
  • I’m worried about adding “Verify the meta data pointing to the uploaded ISO and IUKs” as a blocker in the release process. Historically we RMs have been pretty bad at giving a reliable ETA for such things, so I’m concerned that this adds stress on the TR who is supposed to be available, on short notice, for an unspecified amount of time. I’d rather see this happen post-release, which will relax everyone involved… and also increase the value of the verification, as explained in the previous bullet point.
  • The process depends on the RM explicitly triggering the verification, which can be blocked by hardware/system compromise. I’d rather have something that we know will happen even if the RM does not ask anyone anything (be it because of hardware/system compromise… or more trivially because in the real world, every RM manages to skip/miss/forget at least N% of the release process doc). I believe my proposal (Redmine tickets created in advance) is not affected by this problem, so I don’t understand why we would instead implement a process that is affected.
  • “involve another RM” ← there’s no other RM with time budgeted to do this work (or even awareness they are on-call that day), so I’d rather s/another RM/a Foundations Team member who is not the RM/; and then we need to add this to the Foundations Team role definition because it’s added work/availability.
  • The part about IUKs refers to “solution or explanation the RM presents” but I can’t see where the RM presents any such thing to the TR.
  • “go to the "If something seemingly malicious is found" case for the ISO above” points to text that got removed
  • typo in “reproducibiliy-followup” and in “release_process#reproducibiliy”
  • typo in “the the”

#19 Updated by anonym 2017-10-23 16:14:00

  • QA Check changed from Dev Needed to Info Needed

intrigeri wrote:
> > I more or less reworked the whole text (commit:0ad32beb9ee7422bfde0a513f1cc8af0341ea726), so I now think the two points above are achieved as far as hardware is concerned. I.e. I think the release process now resists an attacker compromising the RM’s hardware (modulo the attacker changing any of the trusted inputs, but the diff review should catch things modified in Tails’ Git, and the APT parts are out of scope for now).
>
> >> Now, I’d like to propose having a third party (e.g. another Foundations Team member, i.e. most often myself) check, shortly after the release goes live, that the published ISO image matches both the published tag and what manual testers have tested. […]
>
> > I like it,
>
> Cool :)
>
> > but don’t see why it has to be done by another Foundations Team member,
>
> Because we need someone who commits to do boring work regularly under tight time constraints. I think the only way to have that is to include it in Core work (it’s almost exactly our working definition of Core work actually), and the simplest way to do that in the short term is to piggy-back on some existing role instead of creating a new one; I happened to pick Foundations Team but feel free to pick another one that fits better if you want, or to propose creating a new dedicated Core work role. I don’t care much, as long as we have good enough means to rely on that commitment.

Ok. I am actively working against this, i.e. you and me becoming more inter-dependent [especially around release time], which is what your proposal means in practice. I’m also not intrigued at becoming blocked by your slow internet connection. :)

I think what I haven’t managed to articulate yet is that I see the TR’s work as part of QA, and e.g. manual testing is just as affected by what you say, and it seems to work. The release QA has the RM as fallback, which won’t work in this case, but the Foundations Team seems like the only sane fallback, so let’s go with that at least.

> > nor why it should be done post release.
>
> Well, it’s logically impossible to check “that the published ISO image matches both the published tag and what manual testers have tested” before it is released, isn’t it?

The ISO image is generally published (read: uploaded) ~24h before we release (read: announce the release on the website), and the Git tag even before that. So I was referring to the possibility of the check happening during this ~24h window right before the release.

> > If you like this approach, I’d like you to test it with me for Tails 3.3.
>
> Nice, but as said above I’d rather not rely on non-formalized commitments for this in the long term. So either make it so the commitment is formalized somewhere, or fall back to the Foundations Team idea.

> I’ve had a look at commit:0ad32beb9ee7422bfde0a513f1cc8af0341ea726 and (surprise!) I have a few comments:
>
> * A compromised RM’s system can still publish a different ISO than the one that has been successfully reproduced by the TR, no?

I’m not really sure what you mean by “publish” here, but I think what you say is trivially true: theoretically, whenever we do the check, a compromised RM can publish a different ISO image just after the check.

> It seems that even with the pre-release “Verify the meta data pointing to the uploaded ISO and IUKs” step, our only protection against this implicitly lies in the fact that some people will monitor every Git commit on the master branch all the time, which is unreliable (nobody really does that consistently, e.g. I often skip merge commits and you sometimes don’t revert spam when you push new stuff there, which suggests you just did git pull without checking the changes closely; and anyway, it’s on nobody’s job definition to do that currently). Hence the need to do the verification after the release, unless I missed something.

I would argue that only checking post-release is simply too late:

  • if there’s some trivial reproducibility problem, we have now lost the chance to cheaply skip the bad release and bump to an “emergency release” with the reproducibility fix. Hopefully this will be a rare occurrence, so the weight of this argument should be low.
  • now there’s a window (start: Tails release; stop: post-release check) where we would happily distribute compromised images, because our process detects them late. Our only defense at this point is that Jenkins is not compromised in the same way as the RM’s system. This is very serious, imho.

To me, the second point means we must have a pre-release check (otherwise I really do not understand what we are trying to achieve here). Also doing a single post-release check might add some non-zero value, but I can’t help but feel it is arbitrary: the compromised ISO/IUK uploads, or changes to their meta data on our website, could then go unnoticed if they happened after that single check. To get anything with real guarantees in this direction we’d need a continuous post-release check, i.e. something that ensures that what we checked with the pre-release check stays true until the release’s EOL (i.e. next release).

> * I’m worried about adding “Verify the meta data pointing to the uploaded ISO and IUKs” as a blocker in the release process. Historically we RMs have been pretty bad at giving a reliable ETA for such things, so I’m concerned that this adds stress on the TR who is supposed to be available, on short notice, for an unspecified amount of time. I’d rather see this happen post-release, which will relax everyone involved…

To me this would be similar to postponing the manual testing until after the release. IMHO this check is the same type of necessary evil as manual testing.

> * The process depends on the RM explicitly triggering the verification, which can be blocked by hardware/system compromise. I’d rather have something that we know will happen even if the RM does not ask anyone anything (be it because of hardware/system compromise… or more trivially because in the real world, every RM manages to skip/miss/forget at least N% of the release process doc). I believe my proposal (Redmine tickets created in advance) is not affected by this problem, so I don’t understand why we would instead implement a process that is affected.

Can you please elaborate on how this is a problem, given that the RM and TR are assumed to work together without malice? And how Redmine tickets are relevant (I’m not against it, it just seems orthogonal).

In the end, I think we need a real time meeting to discuss this. I think we’re working with some slight but important differences among our assumptions and end up talking in circles around each other, but that we actually could easily agree on something sane if we just could understand each other better. What do you think?


> * “involve another RM” ← there’s no other RM with time budgeted to do this work (or even awareness they are on-call that day), so I’d rather s/another RM/a Foundations Team member who is not the RM/; and then we need to add this to the Foundations Team role definition because it’s added work/availability.
> * The part about IUKs refers to “solution or explanation the RM presents” but I can’t see where the RM presents any such thing to the TR.
> * “go to the "If something seemingly malicious is found" case for the ISO above” points to text that got removed
> * typo in “reproducibiliy-followup” and in “release_process#reproducibiliy”
> * typo in “the the”

I’ll deal with these later.

#20 Updated by anonym 2017-10-23 16:14:15

  • Assignee changed from anonym to intrigeri

#21 Updated by intrigeri 2017-10-29 08:21:04

  • Assignee changed from intrigeri to anonym

Hi!

> In the end, I think we need a real time meeting to discuss this. I think we’re working with some slight but important differences among our assumptions and end up talking in circles around each other, but that we actually could easily agree on something sane if we just could understand each other better. What do you think?

Fully agreed. I’m reassigning to you so you track and organize this.

I’ll reply to some points below anyway but I have little hope it helps much, so let’s discuss this when we meet (i.e. soon! :)

anonym wrote:
> intrigeri wrote:
>> anonym wrote:
>> > but don’t see why it has to be done by another Foundations Team member,
>>
>> Because we need someone who commits to do boring work regularly under tight time constraints. I think the only way to have that is to include it in Core work (it’s almost exactly our working definition of Core work actually), and the simplest way to do that in the short term is to piggy-back on some existing role instead of creating a new one; I happened to pick Foundations Team but feel free to pick another one that fits better if you want, or to propose creating a new dedicated Core work role. I don’t care much, as long as we have good enough means to rely on that commitment.

> Ok. I am actively working against this, i.e. you and me becoming more inter-dependent [especially around release time],

I fully agree.

> which is what your proposal means in practice.

I don’t think so: my proposal implies that the other FT member can do this check at some point after the release. Contrary to what you are proposing, this doesn’t necessarily have to be within a 24-hour window.

> I’m also not intrigued at becoming blocked by your slow internet connection. :)

Sure, let’s not do that.

> I think what I haven’t managed to articulate yet is that I see the TR’s work as part of QA, and e.g. manual testing is just as affected by what you say, and it seems to work. The release QA has the RM as fallback, which won’t work in this case, but the Foundations Team seems like the only sane fallback, so let’s go with that at least.

I think we have a problem here. See below.

>> > nor why it should be done post release.
>>
>> Well, it’s logically impossible to check “that the published ISO image matches both the published tag and what manual testers have tested” before it is released, isn’t it?

> The ISO image is generally published (read: uploaded) ~24h before we release (read: announce the release on the website), and the Git tag even before that. So I was referring to the possibility of the check happening during this ~24h window right before the release.

At first glance I don’t want to be the one committed to do this in this timeframe for two reasons:

  • I’m not always available during this 24h window and pretty often changing this would require me to enter sacrifice mode.
  • Our track record of providing reliable info wrt. when this 24h window starts is pretty bad. Waiting for something like this is the kind of thing that kills me. I don’t want to add more of it to my life.

I’m open to discussing this further though :)

> I’m not really sure what you mean with “publish” here but I think what you say is trivially true: theoretically, whenever we do the check, a compromised RM can publish a different ISO image just after the check.

This is correct in theory (and actually correct for any single compromised RM machine, not only the current release’s RM), but in practice it doesn’t work like this: exploiting this weakness requires an RM to plug in their smartcard, which we only do at specific times. Hence my proposal to do the check after the last time the RM for a given release has plugged in their smartcard.

> To me, the second point means we must have a pre-release check (otherwise I really do not understand what we are trying to achieve here).

I’d personally be comfortable enough with relying on Jenkins to do this check. But I’m also fine with having such a check done by someone else during the QA, as you’re proposing; it’s just that IMO it’s not enough to achieve our stated goals, hence my proposal.

> Also doing a single post-release check might add some non-zero value, but I can’t help but feel it is arbitrary: the compromised ISO/IUK uploads, or changes to their meta data on our website, could then go unnoticed if they happened after that single check.

See above for why I agree with this reasoning in a theoretical world that’s slightly different from the one we live in, but disagree once applied to our actual situation.

> To get anything with real guarantees in this direction we’d need a continuous post-release check, i.e. something that ensures that what we checked with the pre-release check stays true until the release’s EOL (i.e. next release).

I think there’s something worthwhile in this idea. It can probably be simplified a lot: we could monitor the detached ISO signature and the IDF and notify people when they change. Assuming we would be notified if the ISO didn’t match either of those anymore, this should be enough to detect any “compromised ISO re-published after the reproducibility check” situation. Depending on some important details it could work either for a pre-release check, or for a post-release one, or for both.
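
A rough sketch of such a monitor, e.g. run from cron on a machine independent of the RM’s (the URLs and the alert address are hypothetical placeholders):

    #!/bin/sh
    # Alert when the published detached ISO signature or the IDF changes.
    set -eu
    for url in \
        "https://tails.example/tails-amd64-$VERSION.iso.sig" \
        "https://tails.example/tails-amd64-$VERSION.idf"; do
        f=$(basename "$url")
        wget -q -O "$f.new" "$url"
        if [ -e "$f" ] && ! cmp -s "$f" "$f.new"; then
            echo "$url changed" | mail -s "Published release artifact changed" alerts@tails.example
        fi
        mv "$f.new" "$f"
    done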

>> * The process depends on the RM explicitly triggering the verification, which can be blocked by hardware/system compromise. I’d rather have something that we know will happen even if the RM does not ask anyone anything (be it because of hardware/system compromise… or more trivially because in the real world, every RM manages to skip/miss/forget at least N% of the release process doc).

> Can you please elaborate on how this is a problem, given that the RM and TR are assumed to work together without malice?

I was specifically reasoning about “hardware/system compromise”, not about human malice: a compromised system can block arbitrary outgoing email… e.g. the one that asks the TR to do their job.

>> I believe my proposal (Redmine tickets created in advance) is not affected by this problem, so I don’t understand why we would instead implement a process that is affected.

> And how Redmine tickets are relevant (I’m not against it, it just seems orthogonal).

I think I was wrong: I was assuming that a pre-existing Redmine ticket would resist a compromised RM system. But it doesn’t, as such a system can silently delete tickets.

So the only way to protect against such an attack against the currently active RM might be to rely on tasks the TRs add to their personal, local calendar in advance.

#22 Updated by intrigeri 2017-10-29 08:23:22

  • Deliverable for changed from 289 to 301

(This won’t be done by the end of the contract.)

#23 Updated by anonym 2017-11-15 11:30:47

  • Target version changed from Tails_3.3 to Tails_3.5

#24 Updated by intrigeri 2017-11-15 15:14:07

  • blocked by deleted (Feature #12356: Communicate about reproducible builds to users via a blog post)

#25 Updated by intrigeri 2017-12-07 12:49:12

anonym and I will have a Mumble in January before 3.5.

#26 Updated by anonym 2018-01-10 15:24:48

intrigeri wrote:
> anonym and I will have a Mumble in January before 3.5.

Attaching the notes from the meeting! I’ll take care of updating the existing docs and design page accordingly.

#27 Updated by anonym 2018-01-23 19:52:27

  • Target version changed from Tails_3.5 to Tails_3.6

#28 Updated by anonym 2018-02-19 19:07:29

  • Priority changed from Elevated to High

This must be ready in time for bert to follow for Tails 3.6 (ideally, 3.6~rc1).

#29 Updated by anonym 2018-02-27 18:48:14

  • Assignee changed from anonym to intrigeri
  • QA Check changed from Dev Needed to Ready for QA

Ok, finally done in commit:e52e568b5a6136f72c898decddc62357a058ff19.

#30 Updated by intrigeri 2018-02-28 10:26:55

  • Assignee changed from intrigeri to anonym
  • QA Check changed from Ready for QA to Dev Needed

> Ok, finally done in commit:e52e568b5a6136f72c898decddc62357a058ff19.

Great!

I’ve pushed a few fixes and improvements: 69a5745fb9eea6e745bc78c45ef15112fa9c97ad..973f996f1a03c8979b630d61d310b008477cf3fa. May I ask that, in the future, you locally build any doc page you modify/create and check that the result looks OK in a web browser? The broken links (with no useful link text even after fixing the link destination) made it quite obvious you did not. IMO the reviewer should not be the first person to notice such issues.

Everything looks good except one important thing. During our last meeting we finally agreed that “inputs: commit of the tag at the time of the QA tests, hash of the products that were tested” is “sent by testers to the TR”. IIRC this was the main point of disagreement between us before that meeting but during that meeting you eventually agreed that “a compromised RM system can block arbitrary outgoing communication (e.g. email) so it cannot be trusted to initiate a check”, which resulted in us specifying the “sent by testers to the TR” bit. Your implementation feels like a step backwards because of “This section is done by the RM”. An implementation that would work better must:

  • Ensure a TR for version N knows in advance they are the TR for that release. I see that you’ve modified contribute/working_together/roles/release_manager a little bit in a way that goes in the right direction, but it’s not sufficient IMO. I’ve done something that feels better to me in commit:de209387e63bd764d73d5fcdbedb72879738185d.
  • Respect the “sent by testers to the TR” specified requirement. I’ll let you handle this part. I think your initial version is really close to that and only needs a few minor adjustments.

Other than that, about the TR doc:

  • I don’t get what the $$ thing means.
  • I think it makes contribute/release_process/test grow too much. I suggest you move the detailed instructions the TR must follow to a sub-page and only keep on contribute/release_process/test the instructions for the testers (which should be “send $INPUT to the TR whose name is in the calendar” and not much more).

#31 Updated by anonym 2018-02-28 16:55:11

  • Assignee changed from anonym to intrigeri
  • QA Check changed from Dev Needed to Ready for QA

intrigeri wrote:
> > Ok, finally done in commit:e52e568b5a6136f72c898decddc62357a058ff19.
>
> Great!
>
> I’ve pushed a few fixes and improvements: 69a5745fb9eea6e745bc78c45ef15112fa9c97ad..973f996f1a03c8979b630d61d310b008477cf3fa. May I ask that, in the future, you locally build any doc page you modify/create and check that the result looks OK in a web browser? The broken links (with no useful link text even after fixing the link destination) made it quite obvious you did not. IMO the reviewer should not be the first person to notice such issues.

Sorry! I always parse the raw markdown only, but I know this is similar to “I carefully read the code” vs actually trying to run it.

> Everything looks good except one important thing. During our last meeting we finally agreed that “inputs: commit of the tag at the time of the QA tests, hash of the products that were tested” is “sent by testers to the TR”. IIRC this was the main point of disagreement between us before that meeting but during that meeting you eventually agreed that “a compromised RM system can block arbitrary outgoing communication (e.g. email) so it cannot be trusted to initiate a check”, which resulted in us specifying the “sent by testers to the TR” bit.

Ah, indeed. I should have written better notes about this, cause this was completely lost on me. Thanks for re-raising it!

> Your implementation feels like a step backwards because of “This section is done by the RM”. An implementation that would work better must:
>
> * Ensure a TR for version N knows in advance they are the TR for that release. I see that you’ve modified contribute/working_together/roles/release_manager a little bit in a way that goes in the right direction, but it’s not sufficient IMO. I’ve done something that feels better to me in commit:de209387e63bd764d73d5fcdbedb72879738185d.

Admittedly, while I think it improves the phrasing, I don’t see how it changes anything qualitatively. I’m mentioning this because it makes me wonder if I still am missing something. Do you think I am?

> * Respect the “sent by testers to the TR” specified requirement. I’ll let you handle this part. I think your initial version is really close to that and only needs a few minor adjustments.

I did an attempt with commit:1a5b6025c3fe9052e8a05d1486bea85f2c375665.

> Other than that, about the TR doc:
>
> * I don’t get what the $$ thing means.

Note the “Adjust the "variables" (prefixed with `$$`)”, but that sentence is also messed up. Is it clearer with commit:e6c850e0a99873b659dab8ececdee8a30e782d41 (and later commits)?

> * I think it makes contribute/release_process/test grow too much. I suggest you move the detailed instructions the TR must follow to a sub-page and only keep on contribute/release_process/test the instructions for the testers (which should be “send $INPUT to the TR whose name is in the calendar” and not much more).

Fair enough. I did that (and a lot more automation) in commit:909b0bad94a9b121a2df8927c62527f5d63b0ad8. Now these instructions are much more straightforward for non-RMs to follow.

#32 Updated by intrigeri 2018-02-28 18:01:17

  • Assignee changed from intrigeri to anonym
  • QA Check changed from Ready for QA to Dev Needed

> Sorry! I always parse the raw markdown only, but I know this is similar to “I carefully read the code” vs actually trying to run it.

commit:8e25a69dbf24072e4e74d60e2d86828df7b6c7bf :P

>> Your implementation feels like a step backwards because of “This section is done by the RM”. An implementation that would work better must:
>>
>> * Ensure a TR for version N knows in advance they are the TR for that release. I see that you’ve modified contribute/working_together/roles/release_manager a little bit in a way that goes in the right direction, but it’s not sufficient IMO. I’ve done something that feels better to me in commit:de209387e63bd764d73d5fcdbedb72879738185d.

> Admittedly, while I think it improves the phrasing, I don’t see how it changes anything qualitatively.

I agree it’s not a huge change but given the depth of mud we’ve been swimming in on this ticket, adding clarity seems useful. Besides, the previous phrasing:

  • did not use the TR terminology so there was too much room IMO for the RM to believe they’ve found a TR while that person is not 100% clear they’re the TR;
  • did not make it clear that the RM does not merely have to ask whether someone can be the TR: the RM has to ensure there will be one, which is not a one-shot email job.

>> * Respect the “sent by testers to the TR” specified requirement. I’ll let you handle this part. I think your initial version is really close to that and only needs a few minor adjustments.

> I did an attempt with commit:1a5b6025c3fe9052e8a05d1486bea85f2c375665.

You’re trying to use Markdown syntax in an HTML block (*everything*). That does not work, as you’ll see after building the website.

I find it debatable that a compromised RM system should be trusted to write the input data that the TR must use for verifying. I suspect you took a bit too literally the example about blocking outgoing email. I would feel more comfortable if a manual tester prepared that email themselves before sending it.

I did not try to decipher the code in test/reproducibility/preparation. When I see \\\$\\{${var}\\} a red light blinks in my mind. This is my last life and I refuse to spend any of it on this, sorry (if it were a one-shot, well, perhaps; but it certainly won’t be the last time I have to decipher this).

Other than that, looks good. I won’t spend too much time on it now, I think we’ll learn more by actually having people go through this and ask questions.

I wonder if we need (short!) instructions somewhere for the TR, that the TR should read when they commit to be the TR:

  • they should not trust a call for reproduction sent by the RM
  • they should expect a call for reproduction sent by a manual tester
  • if they don’t receive any email asking them to reproduce, something’s wrong. It’s obvious for the both of us right now but it feels risky to assume it’ll be clear for them

What do you think?

#33 Updated by anonym 2018-03-02 11:15:54

  • Assignee changed from anonym to intrigeri
  • QA Check changed from Dev Needed to Ready for QA

intrigeri wrote:
> > Sorry! I always parse the raw markdown only, but I know this is similar to “I carefully read the code” vs actually trying to run it.
>
> commit:8e25a69dbf24072e4e74d60e2d86828df7b6c7bf :P
>
> >> Your implementation feels like a step backwards because of “This section is done by the RM”. An implementation that would work better must:
> >>
> >> * Ensure a TR for version N knows in advance they are the TR for that release. I see that you’ve modified contribute/working_together/roles/release_manager a little bit in a way that goes in the right direction, but it’s not sufficient IMO. I’ve done something that feels better to me in commit:de209387e63bd764d73d5fcdbedb72879738185d.
>
> > Admittedly, while I think it improves the phrasing, I don’t see how it changes anything qualitatively.
>
> I agree it’s not a huge change […]

Then we are on the same page. I was just worrying that there was yet another subtle point I was missing. Thanks!

> >> * Respect the “sent by testers to the TR” specified requirement. I’ll let you handle this part. I think your initial version is really close to that and only needs a few minor adjustments.
>
> > I did an attempt with commit:1a5b6025c3fe9052e8a05d1486bea85f2c375665.
>
> You’re trying to use Markdown syntax in an HTML block (*everything*). That does not work, as you’ll see after building the website.

Ah, thanks! The output looked good to me since *everything* is raw markdown, which apparently is what I consider “ready for human consumption”… :P

> I find it debatable that a compromised RM system should be trusted to write the input data that the TR must use for verifying. I suspect you took a bit too literally the example about blocking outgoing email. I would feel more comfortable if a manual tester prepared that email themselves before sending it.

Fair point, but what about the SHAAAA that the RM computed? That’s why I thought the RM had to do the preparation, in the end. But I get it, and agree, and propose this: the RM only generates the (signed!) SHA512SUMS.txt of their products, that’s it. This way, when the TR is done, they have compared both Jenkins’ and the RM’s products against the TR’s products.
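
Concretely, that could look like this (a sketch; the file names are placeholders): the RM signs a checksum file of their own build products, and the TR checks it against the artifacts they rebuilt themselves:

    # RM: checksum and sign their own build products
    sha512sum tails-amd64-"$VERSION".iso Tails_amd64_*_to_"$VERSION".iuk > SHA512SUMS.txt
    gpg --armor --detach-sign SHA512SUMS.txt

    # TR: verify the RM's signature, then check the listed hashes against
    # the TR's own rebuilt products (run in the directory holding them)
    gpg --verify SHA512SUMS.txt.asc SHA512SUMS.txt
    sha512sum -c SHA512SUMS.txt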

> I did not try to decipher the code in test/reproducibility/preparation. When I see \\\$\\{${var}\\} a red light blinks in my mind. This is my last life and I refuse to spend any of it on this, sorry (if it were a one-shot, well, perhaps; but it certainly won’t be the last time I have to decipher this).

Again, I’m sorry about that. (This will go away! See below.)

> Other than that, looks good. I won’t spend too much time on it now, I think we’ll learn more by actually having people go through this and ask questions.

Agreed!

> I wonder if we need (short!) instructions somewhere for the TR, that the TR should read when they commit to be the TR:
>
> * they should not trust a call for reproduction sent by the RM
> * they should expect a call for reproduction sent by a manual tester
> * if they don’t receive any email asking them to reproduce, something’s wrong. It’s obvious for the both of us right now but it feels risky to assume it’ll be clear for them
>
> What do you think?

I like it! But I think we should go another step: what is the point of involving a manual tester at all? The TR can often be assumed to be more skilled than the average tester, and the TR already knows when to start checking whether it’s time to reproduce. So, let’s just have the Trusted Verifier do everything themselves! This way there will be no email (so the templating shell code will go, yay! :)), just a self-contained document of instructions for the TR.
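
A rough sketch of what such a self-contained TR procedure could boil down to (the tag name, build command, and URL are hypothetical placeholders, not the real ones):

    # TR side: rebuild the release from the signed tag, fetch the
    # published image, and compare checksums.
    git fetch --tags
    git verify-tag 3.6 && git checkout 3.6
    ./build-image.sh                 # hypothetical local build wrapper
    wget -O published.iso https://example.org/tails-amd64-3.6.iso
    a=$(sha512sum published.iso | cut -d' ' -f1)
    b=$(sha512sum tails-amd64-3.6.iso | cut -d' ' -f1)
    if [ "$a" = "$b" ]; then echo "reproduced OK"; else echo "MISMATCH" >&2; fi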

#34 Updated by anonym 2018-03-02 11:16:04

  • QA Check changed from Ready for QA to Info Needed

#35 Updated by anonym 2018-03-02 15:36:24

  • QA Check changed from Info Needed to Ready for QA

I optimistically implemented what I suggested above:

2a859c5770 Automate more of the Trusted Verifier's work with IDF/UDFs.
3dd795c19c Mention when it's too early to verify IDF/UDFs.
f7e06a4c87 Optimize.
90ac83efc6 Fix indentation.
3f25f226a1 Remove bad `--recursive` passed to wget.
1ae596392f Make the Trusted Verifier operate almost completely independently.
e9124d1483 Release process: be explicit about when to be able to sign with Tails signing key.
a1abba13b2 Fix HTML.

I hope you like it!
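
Regarding commit 3f25f226a1 above: when fetching a single published artifact, a plain wget call is what is wanted; --recursive makes wget follow links and download far more than the one file. A small illustration (URL made up):

    # Fetch exactly one file, resuming if interrupted. No --recursive,
    # which would crawl linked pages instead of fetching this artifact.
    wget --continue https://example.org/tails/stable/tails-amd64-3.6.iso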

#36 Updated by intrigeri 2018-03-06 12:22:26

  • Assignee changed from intrigeri to anonym
  • QA Check changed from Ready for QA to Dev Needed

anonym wrote:
> intrigeri wrote:

>> I find it debatable that a compromised RM system should be trusted to write the input data that the TR must use for verifying. I suspect you took a bit too literally the example about blocking outgoing email. I would feel more comfortable if a manual tester prepared that email themselves before sending it.

> Fair point, but what about the SHAAAA that the RM computed? That’s why I thought the RM had to do the preparation, in the end. But I get it, and agree, and propose this: the RM only generates the (signed!) SHA512SUMS.txt of their products, and that’s it. This way, when the TR is done, they will have compared both Jenkins’ and the RM’s products against their own.

One of our goals is “Ensure what we’ve tested (QA) and reproduced matches what is published”. I think only a manual tester, and not the RM, can tell the TR what’s being tested (without relying on a potentially compromised RM’s system).

>> I wonder if we need (short!) instructions somewhere for the TR, that the TR should read when they commit to be the TR: […] What do you think?

> I like it!

And I like the doc you wrote for the TR!

> But I think we should go another step: what is the point of involving some manual tester at all?

See above.

Other than that:

  • commit:e9124d1483f61a1eb65214fe6994c8a367164590 adds some more mdwn in an HTML block; once again: please build it and check that it looks OK before submitting for QA. Thank you.
  • I don’t think “make sure the Trusted Verifier is in the list of recipients” is a good idea: it suggests the TR should take into account, or rely on, that email, which defeats the purpose of having a process that is not triggered by the RM. I know that we explicitly ask the TR to do it “from [their] own initiative” but once TRs get used to receiving an email, I suspect some of them (with suboptimal todo list management) will start silently relying on it.
  • You’ve introduced new terminology (Trusted Verifier) but the old one (Trusted Reproducer) is still around. Please choose one and use it consistently.
  • Missing end of sentence or period in “it should list which versions”?

#37 Updated by bertagaz 2018-03-14 11:32:05

  • Target version changed from Tails_3.6 to Tails_3.7

#38 Updated by intrigeri 2018-04-13 12:12:21

anonym will try to get this done during this cycle; if that does not work, we’ll assess the situation after the 3.7 release and see whether this work needs to be shared/assigned differently.

#39 Updated by bertagaz 2018-05-10 11:09:01

  • Target version changed from Tails_3.7 to Tails_3.8

#40 Updated by intrigeri 2018-05-25 13:26:48

  • Assignee changed from anonym to intrigeri

#41 Updated by intrigeri 2018-05-26 09:50:25

  • Priority changed from High to Elevated

anonym wrote:
> This must be ready in time for bert to follow for Tails 3.6 (ideally, 3.6~rc1).

All the RMs are now aware that the doc pushed to master is not complete and should not be blindly followed. This was supposed to be done by the end of January and we’re now at the end of May. High priority was meant to help anonym prioritize his work. I’ve taken this over but I’ll take this (a bit) easy and will aim to finish this by the end of August or September => might be postponed to 3.9 or even 3.10 if needed.

#42 Updated by intrigeri 2018-06-19 16:28:47

  • Target version changed from Tails_3.8 to Tails_3.9

#43 Updated by intrigeri 2018-08-15 19:24:38

  • Target version changed from Tails_3.9 to Tails_3.10.1

#44 Updated by intrigeri 2018-10-15 19:23:53

  • Target version changed from Tails_3.10.1 to Tails_3.11
  • % Done changed from 50 to 70
  • QA Check changed from Dev Needed to Ready for QA

I’ve fixed all the issues raised during the above discussion and pushed to master (where the initial WIP had been pushed). Then I took a step back, pretended I was the TR, and:

  • For the ISO + IDF, the process is relatively straightforward (although the IDF comparison could, and should, be fully automated; see the sketch after this list).
  • For the IUK + UDFs, the process feels too complex, error-prone, and labor-intensive. To fix that, much more of the work should be automated: for example, we’re asking the RM to compare many strings without telling them how. Note that Feature #15287 will force us to streamline the IUK build process, which might incidentally simplify things a bit here. Formally speaking, making IUKs reproducible was not part of our reproducible builds project, so I’ve disabled (commented out when I could, deleted otherwise) the IUK reproducibility verification in commit:aff22ac037159557cddd143ff03fc7b1d76a7758. I’ll file a ticket to track those bits somewhere.
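
For the first point, a sketch of what “fully automated” IDF comparison could look like (the IDF file name, and the assumption that it embeds a SHA-256 hex digest, are illustrative guesses rather than the actual format):

    # Compare the image checksum published in the IDF with the checksum
    # of the locally built image. The field extraction below is a guess.
    published=$(grep -oE '[0-9a-f]{64}' tails-amd64-3.11.iso.idf | head -n1)
    local_sum=$(sha256sum tails-amd64-3.11.iso | cut -d' ' -f1)
    if [ "$published" = "$local_sum" ]; then
        echo "IDF matches the local build"
    else
        echo "MISMATCH: investigate before relying on this image" >&2
        exit 1
    fi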

Next step is to have someone be the TR and try all this out. I’ll do this myself for 3.10, at some point during the 3.11 cycle, and we’ll see how it goes in practice.

#45 Updated by intrigeri 2018-10-15 19:24:51

  • related to Feature #16052: Document post-release reproducibility verification for IUKs added

#46 Updated by intrigeri 2018-10-24 17:05:23

  • Target version changed from Tails_3.11 to Tails_3.12

No manual tester did their part of the job for 3.10 so I cannot do mine this time.

#47 Updated by intrigeri 2019-01-09 17:47:32

#48 Updated by intrigeri 2019-01-14 15:22:22

  • Target version changed from Tails_3.12 to Tails_3.13

#49 Updated by intrigeri 2019-01-29 19:16:02

I did receive the needed info for 3.11 but failed to do my part of the work. So I expected to do it for 3.12, but no manual tester sent me the needed info (possibly because I failed last time, so they assumed I would fail consistently and did not bother? I dunno). So during the next cycle I’ll try to clarify with the RM and manual testers what’s going on, in the hope that there’s at least one release for which both the manual testers and I do the job, so we can test this doc. If this does not work out for 3.13, then I’ll need to reconsider whether our expectations on this ticket are realistic.

#50 Updated by intrigeri 2019-01-29 19:16:24

  • Priority changed from Elevated to High
  • QA Check deleted (Ready for QA)
  • Type of work changed from Contributors documentation to Communicate

#51 Updated by intrigeri 2019-02-07 09:38:36

Asked the 3.12 RMs what happened and what I can expect for 3.13.

#52 Updated by intrigeri 2019-02-07 09:38:46

  • Type of work changed from Communicate to Wait

#53 Updated by intrigeri 2019-02-11 16:56:03

intrigeri wrote:
> Asked the 3.12 RMs what happened

This was clarified: the search for a TR was never completed, which is why I never received anything.

I’ve received the TR info about 3.12 and can now proceed with testing the doc.

#54 Updated by intrigeri 2019-02-11 18:46:30

  • Target version changed from Tails_3.13 to Tails_3.14
  • % Done changed from 70 to 80
  • Type of work changed from Wait to Test

I’ve played the TR for 3.12 and it went mostly smoothly, except for a few pain points that I’ve fixed via commits that all reference this ticket. I’ll do it again for 3.13 so that I QA my own fixes once, and then we’re good to set this up as an official, regular process :)

#55 Updated by anonym 2019-02-14 10:07:12

  • % Done changed from 80 to 90

I was TR for Tails 3.12.1 so I also tested the docs. Apart from a script fix, I have nothing important to contribute:

  • commit:e91f2e8b3c78a05076202b10568e85ea0edec05e
  • commit:a2099c734303be5b844111d91b2d8fc2fbedc540

So it looks great! I’ll let you decide whether this is enough to close this ticket or whether you want to keep it open until you go through it one more time for Tails 3.13.

#56 Updated by intrigeri 2019-02-14 16:01:56

> * commit:e91f2e8b3c78a05076202b10568e85ea0edec05e

Good catch!

> * commit:a2099c734303be5b844111d91b2d8fc2fbedc540

Fine by me. FTR I had left it around because it does not hurt and I wanted to limit the amount of stuff that needs reverting when we tackle the IUKs.

> So it looks great! I’ll let you decide whether this is enough to close this ticket or whether you want to keep it open until you go through it one more time for Tails 3.13.

I’d like to keep it open because the other doc (the manual test suite) has been updated as a follow-up to my observing the person who did this part, and I’d like the updated version to be tested next time. Also, it’ll inform my evaluation of whether it’s realistic to expect 1. RMs to consistently find a TR; 2. manual testers to do this part of the manual test suite.

#57 Updated by intrigeri 2019-03-11 14:25:13

  • Target version changed from Tails_3.14 to Tails_3.15

I’ll RM 3.13 so I can’t be the TR ⇒ postponing to after 3.14, which kibi will RM.

#58 Updated by intrigeri 2019-03-15 10:06:28

  • Target version changed from Tails_3.15 to Tails_3.14

#59 Updated by Anonymous 2019-03-15 10:13:55

We’d like to retest this once.

#60 Updated by intrigeri 2019-03-20 09:22:00

  • Status changed from In Progress to Resolved
  • Assignee deleted (intrigeri)
  • % Done changed from 90 to 100
  • Type of work changed from Test to Contributors documentation

I’ve been the TR for 3.13 and everything went fine. I pushed 2 commits that make the instructions more straightforward and easier to audit. I think we’re done here!

#61 Updated by intrigeri 2019-05-05 08:23:53

  • Target version changed from Tails_3.14 to Tails_3.13.2

#62 Updated by anonym 2019-05-06 15:03:12

  • Target version changed from Tails_3.13.2 to Tails_3.14

#63 Updated by intrigeri 2019-05-06 18:15:37

  • Target version changed from Tails_3.14 to Tails_3.13.2