Bug #9654
"IPv4 TCP non-Tor Internet hosts were contacted" during the test suite
100%
Description
Working on Feature #9518 I ran into the following:
Full network capture available at: /home/kytv/git/tails/tmp/torified_gnupg_sniffer.pcap-2015-06-21T03:54:00+00:00
    The following IPv4 TCP non-Tor Internet hosts were contacted:
    93.104.209.61 (RuntimeError)
    /home/kytv/git/tails/features/support/helpers/firewall_helper.rb:115:in `assert_no_leaks'
    /home/kytv/git/tails/features/support/hooks.rb:152:in `After'
Scenario failed at time 01:25:32
The host is listed here.
Subtasks
Related issues
Related to Tails - Bug #8961: The automated test suite doesn't fetch Tor relays from unverified-microdesc-consensus.bak | Resolved | 2015-02-26
Related to Tails - Bug #9812: "IPv4 non-TCP Internet hosts were contacted" during the test suite | Rejected | 2015-07-29
Blocked by Tails - Feature #9521: Use the chutney Tor network simulator in our test suite | Resolved | 2016-04-15
History
#1 Updated by kytv 2015-06-29 07:29:55
- Target version set to Tails_1.5
#2 Updated by intrigeri 2015-06-30 10:00:39
How can Tor possibly connect to a relay that’s not in the consensus we use for this test? Perhaps this test isn’t looking at (all?) the files that Tor actually uses?
#3 Updated by intrigeri 2015-07-01 11:12:18
This ticket needs an assignee.
#4 Updated by anonym 2015-07-02 07:38:51
intrigeri wrote:
> How can Tor possibly connect to a relay that’s not in the consensus we use for this test? Perhaps this test isn’t looking at (all?) the files that Tor actually uses?
Seems like an instance of Bug #8961. Hmm. Perhaps something that changed semi-recently in tor makes this relevant after all.
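A minimal sketch of how all the on-disk microdesc consensus variants (including the unverified-microdesc-consensus.bak file that Bug #8961 is about) could be enumerated with stem; the data directory path and file name patterns are assumptions for illustration, and the test suite's own parsing is Ruby, so this only shows which files would need to be covered:

    # Hypothetical sketch: list relay addresses from every microdesc consensus
    # variant tor may have on disk, not just cached-microdesc-consensus.
    # Data directory and file names are assumptions, not what the test suite does.
    import glob

    from stem.descriptor import parse_file

    TOR_DATA_DIR = '/var/lib/tor'  # assumed location of tor's DataDirectory

    def relay_addresses():
        addresses = set()
        patterns = ['cached-microdesc-consensus',
                    'unverified-microdesc-consensus*']  # also matches the .bak file
        for pattern in patterns:
            for path in glob.glob('%s/%s' % (TOR_DATA_DIR, pattern)):
                # These files carry no @type annotation, so the type must be given.
                for router in parse_file(
                        path,
                        descriptor_type='network-status-microdesc-consensus-3 1.0'):
                    addresses.add(router.address)
        return addresses

    if __name__ == '__main__':
        print('\n'.join(sorted(relay_addresses())))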
#5 Updated by anonym 2015-07-02 07:41:24
Perhaps we should change strategy: instead of getting the consensus from the system under test, perhaps we should fetch it using some Python script based on stem? I’m not sure if that would make things more robust, or less so.
#6 Updated by intrigeri 2015-07-07 08:09:59
> Perhaps we should change strategy: instead of getting the consensus from the system under test, perhaps we should fetch it using some Python script based on stem? I’m not sure if that would make things more robust, or less so.
Doesn’t stem need a running tor with an up-to-date consensus? If so, which tor do you want to query, if not the one from the system under test? (By the way, querying the system under test’s running tor with stem could be an improvement over the current state of things; we’re already trusting that tor’s knowledge of the Tor network.)
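A sketch of what querying the system under test's running tor could look like with stem, assuming its ControlPort is reachable from wherever the script runs (the port number and the authentication method are assumptions):

    # Hypothetical sketch: ask the running tor, over its control port, which
    # relays it currently knows about. Port and authentication are assumptions.
    from stem.control import Controller

    def relay_addresses_from_running_tor(control_port=9051):
        with Controller.from_port(port=control_port) as controller:
            controller.authenticate()  # cookie or password, depending on setup
            return set(status.address
                       for status in controller.get_network_statuses())

    if __name__ == '__main__':
        for address in sorted(relay_addresses_from_running_tor()):
            print(address)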
#7 Updated by intrigeri 2015-08-03 05:14:27
- Related to Bug #9812: "IPv4 non-TCP Internet hosts were contacted" during the test suite added
#8 Updated by intrigeri 2015-08-03 12:23:40
- Status changed from New to Confirmed
- Assignee set to kytv
- Target version changed from Tails_1.5 to Tails_1.6
anonym, kytv: please find an assignee and set a suitable milestone.
#9 Updated by kytv 2015-09-17 09:37:25
- Assignee changed from kytv to anonym
- QA Check set to Info Needed
Have any ideas as to how to tackle this? (I have zero familiarity with stem.)
#10 Updated by intrigeri 2015-09-17 13:56:03
> (I have zero familiarity with stem.)
FYI I don’t think that’s related to stem.
#11 Updated by anonym 2015-09-17 15:48:47
intrigeri wrote:
> > Perhaps we should change strategy: instead of getting the consensus from the system under test, perhaps we should fetch it using some Python script based on stem? I’m not sure if that would make things more robust, or less so.
>
> Doesn’t stem need a running tor with an up-to-date consensus?
For directory fetches, I think not: https://stem.torproject.org/api/descriptor/remote.html
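A minimal sketch of such a directory fetch with the stem.descriptor.remote module linked above; how (and whether) the test suite should hit the directory mirrors over the network is left open, so this is only an illustration:

    # Hypothetical sketch: fetch a fresh consensus directly from the directory
    # mirrors/authorities with stem, without needing a local tor instance.
    import stem.descriptor.remote

    def fetch_consensus_addresses():
        downloader = stem.descriptor.remote.DescriptorDownloader()
        return set(router.address for router in downloader.get_consensus())

    if __name__ == '__main__':
        addresses = fetch_consensus_addresses()
        print('%d relays in the fetched consensus' % len(addresses))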
#12 Updated by anonym 2015-09-17 15:51:24
anonym wrote:
> Perhaps we should change strategy: instead of getting the consensus from the system under test, perhaps we should fetch it using some Python script based on stem? I’m not sure if that would make things more robust, or less so.
I guess we could get the most stable situation by using stem to check if the results we get are false positives.
#13 Updated by anonym 2015-09-17 15:53:48
- Assignee changed from anonym to kytv
- Target version changed from Tails_1.6 to Tails_1.7
Would you like to give this a try for Tails 1.7? IIRC you’ve said that you know Python well, so making a helper stem-based script (that we can call from inside Ruby) shouldn’t be very hard. stem has excellent docs!
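A sketch of what such a helper script could look like, assuming the false-positive check from the previous note boils down to "is this IP a relay in a freshly fetched consensus?"; the script name and exit-code convention are invented for illustration, and the Ruby side would simply shell out to it with the suspicious IPs:

    #!/usr/bin/env python
    # Hypothetical helper (say, "is_tor_relay.py"): exit 0 if every IP given on
    # the command line belongs to a relay in a freshly fetched consensus,
    # 1 otherwise. Name, CLI and remote fetching are assumptions, not an
    # existing part of the test suite.
    import sys

    import stem.descriptor.remote

    def known_relay_addresses():
        downloader = stem.descriptor.remote.DescriptorDownloader()
        return set(router.address for router in downloader.get_consensus())

    if __name__ == '__main__':
        known = known_relay_addresses()
        unknown = [ip for ip in sys.argv[1:] if ip not in known]
        for ip in unknown:
            print('not a known Tor relay: %s' % ip)
        sys.exit(1 if unknown else 0)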
#14 Updated by kytv 2015-09-25 15:31:02
- QA Check changed from Info Needed to Dev Needed
anonym wrote:
> Would you like to give this a try for Tails 1.7? IIRC you’ve said that you know Python well, so making a helper stem-based script (that we can call from inside Ruby) shouldn’t be very hard. stem has excellent docs!
No, I know “a bit” of Python, but maybe I can make this happen. At the very least I can try.
#15 Updated by kytv 2015-11-04 10:46:06
- Target version changed from Tails_1.7 to Tails_1.8
#16 Updated by anonym 2015-11-06 11:11:59
- Related to Bug #8961: The automated test suite doesn't fetch Tor relays from unverified-microdesc-consensus.bak added
#17 Updated by anonym 2015-11-06 11:18:09
- Related to deleted (Bug #9812: "IPv4 non-TCP Internet hosts were contacted" during the test suite)
#18 Updated by anonym 2015-11-06 11:18:15
- Related to deleted (Bug #8961: The automated test suite doesn't fetch Tor relays from unverified-microdesc-consensus.bak)
#19 Updated by anonym 2015-11-06 11:18:23
- Parent task set to Bug #10288
#20 Updated by anonym 2015-11-06 11:18:48
- Related to Bug #8961: The automated test suite doesn't fetch Tor relays from unverified-microdesc-consensus.bak added
#21 Updated by anonym 2015-11-06 11:19:05
- Related to Bug #9812: "IPv4 non-TCP Internet hosts were contacted" during the test suite added
#22 Updated by anonym 2015-11-06 11:40:12
- Deliverable for set to 270
#23 Updated by anonym 2015-11-06 11:50:38
- Assignee changed from kytv to anonym
#24 Updated by anonym 2015-11-06 12:13:19
- Target version changed from Tails_1.8 to 246
#25 Updated by kytv 2015-11-16 07:59:28
I’m seeing this very, very frequently. :(
The following IPv4 TCP non-Tor Internet hosts were contacted:
82.71.246.79 (RuntimeError)
Or maybe that’s a consequence of re-using a several day old snapshot. Hmm…
#26 Updated by kytv 2015-11-16 08:02:52
kytv wrote:
> I’m seeing this very, very frequently. :(
>
> […]
>
> Or maybe that’s a consequence of re-using a several day old snapshot. Hmm…
I trashed all of my *.memstate files to test this theory. (It may be evident that I have no idea how the consensus works.)
#27 Updated by sajolida 2015-11-27 04:44:09
- Target version changed from 246 to Tails_2.0
#28 Updated by intrigeri 2015-12-01 05:43:11
Just seen this with 109.230.231.166, which is currently https://globe.torproject.org/#/relay/F03A37ADE9366BC5A5899DD7BB0B06AF2CB0B952 (which Globe reports as down for 2 hours 42 minutes).
#29 Updated by anonym 2016-01-06 14:07:37
- Target version changed from Tails_2.0 to Tails_2.2
#30 Updated by anonym 2016-02-12 14:13:58
- Category set to Test suite
Setting this category, since I do not think it matters otherwise. I mean, Tor apparently happily uses these routers. Could be a Tor bug, but likely it’s just that Tor doesn’t keep the state files up-to-date. Perhaps shutting Tor down (or HUP:ing?) before looking at the consensus will flush it?
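If the reload idea were tried, it could also be triggered over the control port with stem rather than by signalling the process; a sketch (the control port and authentication are assumptions, and whether a reload actually flushes the consensus files to disk is exactly the open question here):

    # Hypothetical sketch: ask the running tor to reload its configuration
    # (what a SIGHUP triggers) before the test suite reads its on-disk files.
    from stem import Signal
    from stem.control import Controller

    with Controller.from_port(port=9051) as controller:  # port is an assumption
        controller.authenticate()
        controller.signal(Signal.RELOAD)  # equivalent to `kill -HUP <tor pid>`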
#31 Updated by anonym 2016-02-20 14:17:50
- Priority changed from Normal to Elevated
- Target version changed from Tails_2.2 to Tails_2.3
Will look at this at the same time as Bug #8961 and Bug #10238.
#32 Updated by anonym 2016-03-21 09:38:44
https://lists.torproject.org/pipermail/tor-dev/2016-March/010588.html
> > - moria1 (source 128.31.0.39 vs. consensus 128.31.0.34)
> > - longclaw (source 199.254.238.52 vs. consensus 199.254.238.53)
I.e. these two authorities should be a problem for us, since they’re listed with the IP address from the sources, and not the actual one. Hm. I find it worrying that we don’t see these two fail the test suite regularly.
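If the authorities' hard-coded addresses ever do show up as apparent leaks, one option might be to allowlist them explicitly; a sketch of collecting those addresses with stem (whether special-casing the authorities is desirable at all is an open question, and in newer stem releases this information lives in stem.directory):

    # Hypothetical sketch: collect the directory authorities' hard-coded IP
    # addresses so the firewall check could treat them as expected destinations.
    import stem.descriptor.remote

    def authority_addresses():
        return set(authority.address
                   for authority in stem.descriptor.remote.get_authorities().values())

    if __name__ == '__main__':
        print('\n'.join(sorted(authority_addresses())))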
#33 Updated by anonym 2016-04-22 14:32:10
Once we use Chutney (Feature #9521), I think this will be solved.
But Bug #9654#note-32 is still worrying me a bit.
#34 Updated by anonym 2016-04-26 09:36:35
- Target version changed from Tails_2.3 to Tails_2.4
#35 Updated by anonym 2016-05-13 07:25:36
- Status changed from Confirmed to In Progress
- % Done changed from 0 to 50
- Feature Branch set to test/9521-chutney
- Type of work changed from Code to Wait
anonym wrote:
> Once we use Chutney (Feature #9521), I think this will be solved.
I’m quite convinced about this, so I’ll go with it.
#36 Updated by anonym 2016-05-13 07:26:03
- Blocked by Feature #9521: Use the chutney Tor network simulator in our test suite added
#37 Updated by intrigeri 2016-05-18 15:18:28
I guess the next step is to create a branch that unmarks this test as fragile, and see how it fares in a few days on Jenkins.
#38 Updated by anonym 2016-06-02 18:23:08
- Status changed from In Progress to Fix committed
- Assignee deleted (anonym)
- % Done changed from 50 to 100
- QA Check changed from Dev Needed to Pass
intrigeri wrote:
> I guess the next step is to create a branch that unmarks this test as fragile, and see how it fares in a few days on Jenkins.
I’m truly convinced that Chutney (Feature #9521) solved this, so I’m confident that we can just close this. I’ll reopen it if I see it reappear.
#39 Updated by anonym 2016-06-08 01:30:26
- Status changed from Fix committed to Resolved