Bug #11890
Checking credentials in Thunderbird autoconfig wizard sometimes fails in the test suite
30%
Description
I can’t attach the video (https://jenkins.tails.boum.org/view/RM/job/test_Tails_ISO_stable/511/artifact/build-artifacts/02%3A30%3A31_Icedove_can_send_emails,_and_receive_emails_over_POP3.mkv) since it’s above the 5MB limit.
Files
Subtasks
Related issues
Related to Tails - | Resolved | 2013-09-26 | |
Related to Tails - Feature #12277: Run our own email (IMAP/POP3/SMTP) server for automated tests run on lizard | Needs Validation | 2017-03-02 |
History
#1 Updated by intrigeri 2016-10-31 08:42:03
- related to Feature #6304: Automate the most important bits of the Icedove tests added
#2 Updated by intrigeri 2016-10-31 08:43:19
- Parent task set to Bug #10288
#3 Updated by bertagaz 2016-11-01 14:23:36
Another instance here: https://jenkins.tails.boum.org/job/test_Tails_ISO_devel/587/
#4 Updated by anonym 2016-12-05 20:19:55
- Priority changed from Normal to Elevated
#5 Updated by anonym 2016-12-05 20:20:34
- Priority changed from Elevated to Normal
#6 Updated by anonym 2016-12-05 20:21:41
- Target version changed from Tails_2.9.1 to Tails 2.10
#7 Updated by intrigeri 2016-12-05 20:22:10
Depending on the November false positives triaging, we’ll mark as fragile or not, bump prio or not, postpone or not.
#8 Updated by intrigeri 2016-12-20 10:38:53
This was one of the most common failures on stable and devel in Oct+Nov, so I’m going to mark the test as fragile. But it affects only icedove.feature, so I’m not bumping priority.
#9 Updated by anonym 2017-01-21 13:41:01
- Assignee changed from anonym to intrigeri
I am afraid this failure actually happens when the Riseup mail servers (which we use) are down. As a regular Riseup mail user, I can attest that it’s not that uncommon. :/
I tried to manually reproduce this by killing the network (but not Tor). Then each retry took over a minute before the error happened again, unlike in the test suite failure video above, where the error is detected almost immediately. The only way I could achieve the exact same behavior (i.e. fast error detection) was by clicking on “Manual config” and then changing “Server address” to something that doesn’t listen on the target port. I used x.org, for which port 993 is closed and not “filtered” (according to nmap); when the port is filtered, it takes a long time, which makes sense. So, it seems it’s not the fault of Tor, but the server.
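The closed-vs-filtered timing difference described above can be demonstrated with a small Python sketch (my own illustration, not test suite code): a connection to a closed port is refused almost instantly (RST comes back), while a filtered port silently drops packets and the client waits out its full timeout:

```python
import socket
import time

def probe(host, port, timeout=5.0):
    """Try a TCP connect; return ('open'|'closed'|'filtered', elapsed seconds)."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "open", time.monotonic() - start
    except ConnectionRefusedError:
        # An RST came back right away: the port is closed, the error is immediate
        return "closed", time.monotonic() - start
    except socket.timeout:
        # No answer at all: the port is filtered, we waited the whole timeout
        return "filtered", time.monotonic() - start

# Find a localhost port that nothing listens on, then probe it:
s = socket.socket()
s.bind(("127.0.0.1", 0))
unused_port = s.getsockname()[1]
s.close()
state, elapsed = probe("127.0.0.1", unused_port)
print(state, elapsed < 1.0)  # the refusal is near-instant
```

A “fast error detection” in the wizard is thus consistent with the server actively refusing (or resetting) the connection, not with packets disappearing into the void.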
It would seem that another plausible alternative to the Riseup servers being down is that we get a bad DNS record for *.riseup.net (perhaps because of a bad exit?), and Tor caches it for the duration of the test without fetching a new one. But since we use Chutney I don’t think this should happen.
So my working theory is that the servers are down. To verify we’d either need a detailed (per minute?) and accurate log of Riseup’s servers’ availability so we can compare, or perhaps switch to another mail provider (we could even try with switching only half of the isotesters to a new one, for better comparison). What do you think, tails-sysadmin@?
#10 Updated by anonym 2017-01-21 15:28:21
Note to self: I just had an issue where my system’s Icedove could not connect to imap.riseup.net when fetching email, no matter how much I requested new circuits and clicked “Retry” in the error dialog (presumably the same code is run in Icedove in this different case). I verified that the host was up and listening to TCP port 993. After restarting Tor, it immediately worked again. Could this mean something? Maybe this invalidates my experience that the servers are down occasionally, and that it actually is some Tor vs Icedove bug?
#11 Updated by intrigeri 2017-01-23 07:26:46
- Target version changed from Tails 2.10 to Tails_2.11
#12 Updated by intrigeri 2017-01-23 07:27:19
- QA Check set to Info Needed
#13 Updated by anonym 2017-01-23 14:09:36
anonym wrote:
> Note to self: I just had an issue where my system’s Icedove could connect to imap.riseup.net when fetching email, no matter how much I requested new circuits and clicked “Retry” in the error dialog (presumably the same code is run in Icedove in this different case). I verified that the host was up and listening to TCP port 993. After restarting Tor, it immediately worked again. Could this mean something? Maybe this invalidates my experience that the servers are down occasionally, and that it actually is some Tor vs Icedove bug?
Related: It happened again. This time I retried a few times, with the error popping up again pretty much immediately. I did signal newnym but the problem persisted. However, if I cancelled and just pressed the “Get Messages” button, then it suddenly worked fine. Was that first “attempt” simply cursed to never succeed no matter how many retries? Had it gotten into this bad state thanks to Tor, somehow?
Next time I should try to “Cancel” and “Get Messages” without a signal newnym, to try to rule out or implicate Tor’s role.
#14 Updated by intrigeri 2017-01-25 09:23:09
anonym wrote:
> Note to self: I just had an issue where my system’s Icedove could connect to imap.riseup.net when fetching email, no matter how much I requested new circuits and clicked “Retry” in the error dialog
Did you mean “could not connect”?
Also, is this because the connection is kept open, so the retries still use the same circuit?
#15 Updated by intrigeri 2017-01-25 09:27:21
- Assignee changed from intrigeri to anonym
anonym wrote:
> So my working theory is that the servers are down. To verify we’d either need a detailed (per minute?) and accurate log of Riseup’s servers’ availability so we can compare,
I’ve looked at their munin and its resolution is not good enough to provide such data: I never see less than 30-100 IMAP logins. I don’t think we can easily get the data you want without bothering Riseup people.
> What do you think, tails-sysadmin@?
What would be the requirements for a (private) IMAP server we would run ourselves? (Just like we already do for the SSH tests.)
#16 Updated by anonym 2017-01-25 11:16:16
intrigeri wrote:
> anonym wrote:
> > Note to self: I just had an issue where my system’s Icedove could connect to imap.riseup.net when fetching email, no matter how much I requested new circuits and clicked “Retry” in the error dialog
>
> Did you mean “could not connect”?
Yes! I’ll edit the post.
> Also, is this because the connection is kept open, so the retries still use the same circuit?
I guess — if the connection is kept open, the circuit won’t be closed and will in fact be reused. OTOH, since we use Chutney in the test suite, the test suite host is always the exit. That invalidates my theory about picking an exit node for which riseup.net is censored. I’m wondering if there’s some worse bug in Thunderbird, where it can get into a state where it will always fail.
#17 Updated by anonym 2017-01-25 11:27:45
- Assignee changed from anonym to intrigeri
intrigeri wrote:
> anonym wrote:
> > So my working theory is that the servers are down. To verify we’d either need a detailed (per minute?) and accurate log of Riseup’s servers’ availability so we can compare,
>
> I’ve looked at their munin and its resolution is not good enough to provide such data: I never see less than 30-100 IMAP logins. I don’t think we can easily get the data you want without bothering Riseup people.
ACK, I thought so.
> > What do you think, tails-sysadmin@?
>
> What would be the requirements for a (private) IMAP server we would run ourselves? (Just like we already do for the SSH tests.)
It would need to do IMAP/POP3/SMTP, whose ports (which should be the standard ones!) must be exposed to the Internet. SMTP should be locked down so only mail to itself can be delivered. It’d be extra cool if it could implement a daily cleanup of the inbox, so mails older than 24 h are deleted — that way we can drop the cleanup code from the test suite (and start doing the test for POP again).
A problem, though, is that since this account’s credentials would be public, anyone can log in and try to subvert our test results, e.g. fill the inbox (DoS), mess with the emails we use as verification (forcing false positives or false negatives), and so on. A solution would be for the server to accept any credentials and create the account on the fly (so we’d use random ones), and then clean it up after 24 hours or so. Sounds like a fun sysadmin task, but perhaps too time-consuming?
#18 Updated by intrigeri 2017-01-25 13:28:12
- Assignee changed from intrigeri to anonym
>> What would be the requirements for a (private) IMAP server we would run ourselves? (Just like we already do for the SSH tests.)
> It would need to do IMAP/POP3/SMTP, whose ports (which should be the standard ones!) must be exposed to the Internet.
Actually, we can probably configure it so that isotesters can reach it, and nobody outside of lizard can. With some simple DNS + firewall tricks, this should work nicely, and we could even have an autoconfig XML file served by our webserver for Thunderbird to fetch (which is how it does it currently with Riseup, I guess).
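For reference, a Thunderbird autoconfig file is an XML document the client fetches from a well-known URL on the mail domain (http://autoconfig.&lt;domain&gt;/mail/config-v1.1.xml). A minimal sketch, assuming the isotester FQDN and standard TLS ports (all values here are illustrative, not our actual config):

```xml
<clientConfig version="1.1">
  <emailProvider id="isotester1.lizard">
    <domain>isotester1.lizard</domain>
    <incomingServer type="imap">
      <hostname>isotester1.lizard</hostname>
      <port>993</port>
      <socketType>SSL</socketType>
      <authentication>password-cleartext</authentication>
      <username>%EMAILADDRESS%</username>
    </incomingServer>
    <outgoingServer type="smtp">
      <hostname>isotester1.lizard</hostname>
      <port>465</port>
      <socketType>SSL</socketType>
      <authentication>password-cleartext</authentication>
      <username>%EMAILADDRESS%</username>
    </outgoingServer>
  </emailProvider>
</clientConfig>
```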
> SMTP should be locked down so only mail to itself can be delivered.
Quite easy (but not required I think, unless I missed something).
> It’d be extra cool if it could implement a daily cleanup of the inbox, so mails older than 24 h are deleted — that way we can drop the cleanup code from the test suite (and start doing the test for POP again).
I think that dovecot has facilities to do that out of the box.
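For instance (a sketch, assuming a stock Dovecot install with doveadm available; the cron path and account name are hypothetical), a daily cron job could expunge anything saved more than a day ago:

```shell
# /etc/cron.daily/cleanup-test-inbox (hypothetical path)
# Delete all messages saved more than one day ago from the test account's INBOX
doveadm expunge -u test mailbox INBOX savedbefore 1d
```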
> A problem, though, is that since this account’s credentials would be public,
Why? We support configuring arbitrary accounts, right? So the way I see it, the email account for isotesters would live in the (private) isotesters secrets repo, and nobody else would be able to access and use them. E.g. you and I would keep using whatever other account we currently use.
#19 Updated by anonym 2017-01-25 13:43:58
- Assignee changed from anonym to intrigeri
intrigeri wrote:
> >> What would be the requirements for a (private) IMAP server we would run ourselves? (Just like we already do for the SSH tests.)
>
> > It would need to do IMAP/POP3/SMTP, whose ports (which should be the standard ones!) must be exposed to the Internet.
>
> Actually, we can probably configure it so that isotesters can reach it, and nobody outside of lizard can. With some simple DNS + firewall tricks, this should work nicely, and we could even have an autoconfig XML file served by our webserver for Thunderbird to fetch (which is how it does it currently with Riseup, I guess).
Ok. I had different assumptions (see below).
[…]
> > A problem, though, is that since this account’s credentials would be public,
>
> Why? We support configuring arbitrary accounts, right? So the way I see it, the email account for isotesters would live in the (private) isotesters secrets repo, and nobody else would be able to access and use them. E.g. you and I would keep using whatever other account we currently use.
When you said “Just like we already do for the SSH tests” I thought of the “unsafe SSH key” thing we use for the Git tests, so we wouldn’t need any secrets at all for Icedove. But ok, since doing that would be hard, let’s forget about it.
If this is easy enough to do, perhaps we can try it. However, we have not ruled out that Icedove simply is buggy when retrying, and if that is the case doing this would be pointless. What do you think?
(In the meantime, I’m awaiting Icedove to misbehave for me again.)
#20 Updated by intrigeri 2017-01-26 08:43:10
- Assignee changed from intrigeri to anonym
> Ok. I had different assumptions (see below).
… and I was mis-remembering how we’re doing it :)
> If this is easy enough to do, perhaps we can try it. However, we have not ruled out that Icedove simply is buggy when retrying, and if that is the case doing this would be pointless. What do you think?
Very roughly, setting up the needed infra would take 2-4 hours. I’ll let you make the call about if/when it’s worth going that way.
#21 Updated by anonym 2017-02-18 18:58:33
When debugging Icedove these are some useful prefs to put in .icedove/profile.default/preferences/0000tails.js:
pref("browser.dom.window.dump.enabled", true);
pref("mailnews.database.global.logging.dump", true);
pref("mail.wizard.logging.dump", "All");
#22 Updated by anonym 2017-02-20 12:54:15
- Assignee changed from anonym to intrigeri
In my quest to try to understand what causes this error I have tried to simulate a “network error” (broadly speaking) like this:
- Boot Tails with network.
- Start Icedove, fill in a riseup.net address + password.
- When it presents the IMAP + SMTP configuration, I unplug the network while tor@ is still running. [I also tried to DROP all Tor traffic via the firewall, in case tor would behave differently vs clients when circuits just don’t work, compared to when the network is just down; no difference.]
- Then I press “Done” so it will try the password.
- After two minutes it will show the failure.
- Pressing “Done” again (to test again) results in the same error, after exactly two minutes.
The last two points differ from the video: there, the first error happens after 44 seconds, and then each retry fails after 3 seconds (except the second retry, which for some reason takes 7 seconds). I guess the fact that we don’t get the full two-minute timeout could mean that we actually had a connection, but the server closed it, probably by shutting down completely, which would also explain why the following connections fail more or less immediately. OTOH, we are forcing a new Tor circuit between each retry, so I find it suspicious that each retry fails after a more or less fixed time; I’d expect much more variance with this many new circuits.
By the way, note that Tor rate-limits NEWNYM to one per 10 seconds, so some of our NEWNYMs actually do nothing since they happen too often in this retry loop; but at least three of our NEWNYMs are effective, so I do not think this is a problem. Just in case, I took the liberty of fixing it in our base branches (commit:3bc868968f47c414bdb3e32d09df675fea5766b8) after testing it thoroughly.
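The effect of that 10-second rate limit on a tight retry loop can be modeled with a small sketch (a hypothetical helper for illustration, not the test suite’s actual code): only requests spaced at least 10 seconds apart are effective, so a retry loop firing every 3 seconds gets roughly one effective NEWNYM per four attempts:

```python
import time

NEWNYM_RATE_LIMIT = 10  # Tor honors at most one NEWNYM per 10 seconds

class NewnymThrottle:
    """Track which NEWNYM requests would actually be effective."""

    def __init__(self, clock=time.monotonic):
        self._clock = clock
        self._last_effective = None

    def request(self):
        """Return True if this NEWNYM would be effective, False if rate-limited."""
        now = self._clock()
        if self._last_effective is None or now - self._last_effective >= NEWNYM_RATE_LIMIT:
            self._last_effective = now
            return True  # here one would actually send SIGNAL NEWNYM to the control port
        return False

# Simulate a retry loop firing every 3 seconds, like the wizard retries:
fake_time = [0.0]
throttle = NewnymThrottle(clock=lambda: fake_time[0])
effective = sum(1 for _ in range(10)
                if throttle.request() or not fake_time.__setitem__(0, fake_time[0]))
```

(Requests land at t=0, 3, 6, … and only those at t=0, 12 and 24 are effective: 3 out of 10.)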
So where are we now? Well, even if nothing is certain, my experiment + Occam’s razor seems to indicate that the server we are using is the problem, not Tor. Do you agree? If so I think it’s worth potentially wasting sysadmin time by setting up our own locked down IMAP/POP3/SMTP server.
#23 Updated by intrigeri 2017-03-02 08:26:44
- blocked by Feature #12277: Run our own email (IMAP/POP3/SMTP) server for automated tests run on lizard added
#24 Updated by intrigeri 2017-03-02 08:27:00
- Assignee changed from intrigeri to anonym
- QA Check changed from Info Needed to Dev Needed
> So where are we now? Well, even if nothing is certain, my experiment + Occam’s razor seems to indicate that the server we are using is the problem, not Tor. Do you agree?
ACK.
> If so I think it’s worth potentially wasting sysadmin time by setting up our own locked down IMAP/POP3/SMTP server.
So this is now Feature #12277.
#25 Updated by anonym 2017-03-02 10:05:56
- Target version changed from Tails_2.11 to Tails_3.2
Bumping to same release as blocker.
#26 Updated by anonym 2017-06-29 13:33:11
- blocks Feature #13239: Core work 2017Q3: Test suite maintenance added
#27 Updated by intrigeri 2017-09-07 06:35:24
- Target version changed from Tails_3.2 to Tails_3.3
#28 Updated by intrigeri 2017-10-01 09:57:40
- blocks Feature #13240: Core work 2017Q4: Test suite maintenance added
#29 Updated by intrigeri 2017-10-01 09:57:44
- blocked by deleted (Feature #13239: Core work 2017Q3: Test suite maintenance)
#30 Updated by intrigeri 2017-11-10 14:16:30
- Target version changed from Tails_3.3 to Tails_3.5
#31 Updated by intrigeri 2017-12-07 12:44:17
- Target version deleted (Tails_3.5)
#32 Updated by intrigeri 2017-12-10 08:04:36
- Feature Branch set to test/11890-local-email-server
#33 Updated by intrigeri 2017-12-10 08:06:00
- Status changed from Confirmed to In Progress
Applied in changeset commit:e3ff659847d37a46126587fed4e41286af2c9055.
#34 Updated by intrigeri 2017-12-11 10:47:53
- % Done changed from 0 to 10
Summing up the current status of Feature #12277:
- Every Jenkins isotester of ours now runs a local email server. The FQDN of each isotester (e.g. isotester3.lizard) resolves to the IP address where the corresponding email services (Postfix, Dovecot, nginx that serves the autoconfig file) listen.
- wrap_test_suite patches the thunderbird.yml config to use said local email server, i.e. address=test@FQDN (e.g. test@isotester1.lizard), password=test.
So the next step is to try using this in the test suite. The feature branch re-enables the Thunderbird tests and should be used to make progress on this front. If everything goes well (famous last words), the only missing bit is importing the snakeoil certificate of the host system (isotester) into Thunderbird, which can be done this way: start Thunderbird, cancel the wizard, close Thunderbird => cert8.db is created; then use certutil -A (https://access.redhat.com/documentation/en-US/Red_Hat_Certificate_System/8.0/html/Admin_Guide/Managing_the_Certificate_Database.html#Installing_Certificates_Using_certutil) to add the snakeoil certificate (/etc/ssl/certs/ssl-cert-snakeoil.pem on the host system) to Thunderbird’s NSS database; then you can run the actual tests.
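Concretely, that import could look like this (a sketch; the profile path and the “snakeoil” nickname are assumptions, and whether the “C” CA trust flag is the right one for a self-signed server certificate is an open question):

```shell
# Run after Thunderbird has created cert8.db in the profile (hypothetical path):
PROFILE=~/.icedove/profile.default
# Import the host's snakeoil certificate, trusted for server authentication
certutil -A -d "$PROFILE" -n snakeoil \
         -t "C,," -i /etc/ssl/certs/ssl-cert-snakeoil.pem
```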
If this displays any problem in the underlying infra, please report back on Feature #12277.
#35 Updated by intrigeri 2017-12-11 10:48:07
- blocks deleted (Feature #12277: Run our own email (IMAP/POP3/SMTP) server for automated tests run on lizard)
#36 Updated by intrigeri 2017-12-11 10:48:14
- related to Feature #12277: Run our own email (IMAP/POP3/SMTP) server for automated tests run on lizard added
#37 Updated by anonym 2017-12-11 16:28:59
- Assignee changed from anonym to intrigeri
- % Done changed from 10 to 30
- QA Check changed from Dev Needed to Ready for QA
intrigeri wrote:
> Summing up the current status of Feature #12277:
>
> * Every Jenkins isotester of ours now runs a local email server. The FQDN of each isotester (e.g. isotester3.lizard) resolves to the IP address where the corresponding email services (Postfix, Dovecot, nginx that serves the autoconfig file) listen.
Cool!
Without spending more than two minutes on this (if you cannot give a meaningful answer under those constraints, ignore this), how reusable could this be if all test suite instances run local email services (think: Feature #9519)?
> * wrap_test_suite patches the thunderbird.yml config to use said local email server, i.e. address=test@FQDN (e.g. test@isotester1.lizard), password=test.
Ok. The last run of this branch on Jenkins shows that the old (riseup) configuration was still used, but presumably your infra stuff was not pushed yet at the time. The new commit I just pushed should let me know if there actually is an issue here.
> So the next step is to try using this in the test suite. The feature branch re-enables the Thunderbird tests and should be used to make progress on this front. If everything goes well (famous last words), the only missing bit is importing the snakeoil certificate of the host system (isotester) into Thunderbird, which can be done this way: start Thunderbird, cancel the wizard, close Thunderbird => cert8.db is created; then use certutil -A (https://access.redhat.com/documentation/en-US/Red_Hat_Certificate_System/8.0/html/Admin_Guide/Managing_the_Certificate_Database.html#Installing_Certificates_Using_certutil) to add the snakeoil certificate (/etc/ssl/certs/ssl-cert-snakeoil.pem on the host system) to Thunderbird’s NSS database; then you can run the actual tests.
I picked a different solution, see commit:239b2bc2859e7167e11e0f7db31756a939baf246, which I prefer since it doesn’t require the extra restart of Thunderbird.
> If this displays any problem in the underlying infra, please report back on Feature #12277.
Ack!
#38 Updated by intrigeri 2017-12-13 13:12:29
- Assignee changed from intrigeri to anonym
- QA Check changed from Ready for QA to Dev Needed
> Without spending more than two minutes on this (if you cannot give a meaningful answer under those constraints, ignore this), how reusable could this be if all test suite instances run local email services (think: Feature #9519)?
Not very reusable as I’m using system services, and in the only non-Jenkins use case for this I can think of (development laptop), one certainly does not want to fiddle with the developer’s own Postfix/Dovecot/nginx config. That’s why Feature #9519 is actually hard, and most likely its benefit is not worth its cost, while so far we’ve managed to find ad-hoc solutions for most issues it would supposedly solve (assuming it does not cause other, new issues).
> Ok. The last run of this branch on Jenkins shows that the old (riseup) configuration was still used, but presumably your infra stuff was not pushed yet at the time. The new commit I just pushed should let me know if there actually is an issue here.
https://jenkins.tails.boum.org/view/Tails_ISO/job/test_Tails_ISO_test-11890-local-email-server/5/artifact/build-artifacts/02%3A42%3A09_Thunderbird_can_send_emails,_and_receive_emails_over_IMAP.png suggests the config is now correctly patched, but something else goes wrong. I don’t know what. Possible causes include:
- importing the cert is done wrong (e.g. if you’re importing it as a CA, I don’t know if that will work because I think it’s self-signed, but I did not check myself)
- firewall or guest <-> host networking issue
> I picked a different solution, see commit:239b2bc2859e7167e11e0f7db31756a939baf246, which I prefer since it doesn’t require the extra restart of Thunderbird.
Looks good to me modulo “respecitve” typo.
#39 Updated by intrigeri 2017-12-13 15:34:39
intrigeri wrote:
> * firewall or guest <-> host networking issue
I think we can dismiss this as I see connection attempts to Dovecot and Postfix on the host:
Dec 13 15:17:03 isotester1 postfix-TailsToaster/smtpd[8943]: connect from isotester1.sib[192.168.123.16]
Dec 13 15:17:03 isotester1 postfix/submission/smtpd[8940]: disconnect from isotester1.sib[192.168.123.16] ehlo=1 quit=1 commands=2
Dec 13 15:17:03 isotester1 dovecot[677]: pop3-login: Aborted login (no auth attempts in 0 secs): user=<>, rip=192.168.123.16, lip=192.168.123.16, secured, session=<SeuEQjpgJoTAqHsQ>
Dec 13 15:17:06 isotester1 dovecot[677]: pop3-login: Disconnected (no auth attempts in 1 secs): user=<>, rip=192.168.123.16, lip=192.168.123.16, TLS: Disconnected, session=<B/+kQjpgiITAqHsQ>
… I think that’s the autoconfig wizard probing stuff.
But I also see lots of such messages:
Dec 13 15:15:03 isotester1 dovecot[677]: imap-login: Disconnected (no auth attempts in 0 secs): user=<>, rip=192.168.123.16, lip=192.168.123.16, TLS: SSL_read() failed: error:14094418:SSL routines:ssl3_read_bytes:tlsv1 alert unknown ca: SSL alert number 48, session=<vcdWOzpggr3AqHsQ>
… which might be caused by a Dovecot configuration problem (although I’m pretty sure I did try to connect to that test Dovecot with imaps already). If needed I can try to correlate this with the debug.log timestamps once my test suite run is completed.
#40 Updated by anonym 2017-12-20 15:48:08
intrigeri wrote:
> > Without spending more than two minutes on this (if you cannot give a meaningful answer under those constraints, ignore this), how reusable could this be if all test suite instances run local email services (think: Feature #9519)?
>
> Not very reusable as I’m using system services, and in the only non-Jenkins use case for this I can think of (development laptop), one certainly does not want to fiddle with the developer’s own Postfix/Dovecot/nginx config. That’s why Feature #9519 is actually hard
:/
> and mostly likely its benefit is not worth its cost while so far we’ve managed to find ad-hoc solutions for most issues it would supposedly solve (assuming it does not cause other, new issues).
The fact that the sysadmin team has to be involved in the development of such tests, and that they have to maintain these services “forever” is enough to convince me that Feature #9519 is worth it. Besides, this approach only helps Jenkins — you, me, sib, etc will still be annoyed by failures in our local runs. But I certainly would agree that it is not a priority, and probably won’t be for some time.
> > Ok. The last run of this branch on Jenkins shows that the old (riseup) configuration was still used, but presumably your infra stuff was not pushed yet at the time. The new commit I just pushed should let me know if there actually is an issue here.
>
> https://jenkins.tails.boum.org/view/Tails_ISO/job/test_Tails_ISO_test-11890-local-email-server/5/artifact/build-artifacts/02%3A42%3A09_Thunderbird_can_send_emails,_and_receive_emails_over_IMAP.png suggests the config is now correctly patched, but something else goes wrong. I don’t know what. Possible causes include:
>
> * importing the cert is done wrong (e.g. if you’re importing it as a CA, I don’t know if that will work because I think it’s self-signed, but I did not check myself)
I pushed commit:412598da97fc12de8bd724c204bb05ed13450c64 and will monitor if it did the trick or not.
(Reference for the flags: https://developer.mozilla.org/en-US/docs/Mozilla/Projects/NSS/tools/NSS_Tools_certutil)
> > I picked a different solution, see commit:239b2bc2859e7167e11e0f7db31756a939baf246, which I prefer since it doesn’t require the extra restart of Thunderbird.
>
> Looks good to me modulo “respecitve” typo.
Fixed!
#41 Updated by intrigeri 2017-12-24 11:10:17
> Besides, this approach only helps Jenkins — you, me, sib, etc will still be annoyed by failures in our local runs.
I’m not going to discuss Feature #9519 here further since we at least agree that “it is not a priority, and probably won’t be for some time”.
>> * importing the cert is done wrong (e.g. if you’re importing it as a CA, I don’t know if that will work because I think it’s self-signed, but I did not check myself)
> I pushed commit:412598da97fc12de8bd724c204bb05ed13450c64 and will monitor if it did the trick or not.
Great!
#42 Updated by anonym 2017-12-28 00:15:20
- Assignee changed from anonym to intrigeri
- QA Check changed from Dev Needed to Info Needed
intrigeri wrote:
> >> * importing the cert is done wrong (e.g. if you’re importing it as a CA, I don’t know if that will work because I think it’s self-signed, but I did not check myself)
>
> > I pushed commit:412598da97fc12de8bd724c204bb05ed13450c64 and will monitor if it did the trick or not.
>
> Great!
Well, it didn’t work. I suspect it’ll be easiest if I can debug with an SSH tunnel, but it seems to be disabled on at least isotester1. Could we temporarily set

AllowTCPForwarding yes
PermitOpen any

for sshd so I can tunnel in?
#43 Updated by intrigeri 2018-01-01 16:59:58
- blocked by deleted (Feature #13240: Core work 2017Q4: Test suite maintenance)
#44 Updated by intrigeri 2018-01-01 17:00:04
- blocks Feature #13241: Core work: Test suite maintenance added
#45 Updated by intrigeri 2018-01-03 11:46:22
- Assignee changed from intrigeri to anonym
- QA Check changed from Info Needed to Dev Needed
> Well, it didn’t work. I suspect it’ll be easiest if I can debug with an SSH tunnel, but it seems to be disabled on at least isotester1. Could we temporarily set
>
> AllowTCPForwarding yes
> PermitOpen any
>
> for sshd so I can tunnel in?
Done for AllowTCPForwarding. I don’t see why PermitOpen any would be needed so I did not touch it.
#46 Updated by intrigeri 2018-04-08 14:01:30
- blocked by deleted (Feature #13241: Core work: Test suite maintenance)
#47 Updated by intrigeri 2018-09-14 11:23:56
- Target version set to 2020
#48 Updated by intrigeri 2019-03-14 13:36:28
- Deliverable for deleted (SponsorS_Internal)
Will be done as part of core work: it’s on our 2020 roadmap.
#49 Updated by intrigeri 2019-08-10 16:09:43
#50 Updated by intrigeri 2019-08-30 12:07:11
- Subject changed from Checking credentials in Icedove autoconfig wizard sometimes fails in the test suite to Checking credentials in Thunderbird autoconfig wizard sometimes fails in the test suite
This problem never happened in the last 2 months on Jenkins, while:
- I’ve been using +force-all-tests for most of my branches.
- During the last 3 weeks, we’ve been running the full test suite, including these scenarios, on all our base branches (stable, testing, devel, feature/tor-nightly-master).
So I suspect that there were robustness/reliability improvements somewhere, e.g. either at Riseup or in Thunderbird. Therefore, on my upcoming test/update-fragile-tags branch, I’ll remove the fragile tags for these scenarios. We can re-add them if it turns out they’re still really fragile. And we’ll have more info that will help us (re)prioritize this ticket.