Bug #9632

restart-vidalia appears to occasionally fail to actually restart Vidalia

Added by kytv 2015-06-20 03:51:17. Updated 2016-04-13 05:27:13.

Status:
Rejected
Priority:
Normal
Assignee:
Category:
Test suite
Target version:
Start date:
2015-06-20
Due date:
% Done:

0%

Feature Branch:
Type of work:
Code
Blueprint:

Starter:
0
Affected tool:
Deliverable for:

Description

While working on Feature #9518 (in which I make Tor use a new circuit via Vidalia), my tests failed because Vidalia was not running.

restart-vidalia was run and should have started Vidalia, but for whatever reason it was not running.

spawning as root: restart-vidalia
spawn returned: [0, "", ""]
calling as amnesia: gpg --batch --keyserver-options import-clean --recv-key '10CC5BC7'
call returned: [2, "gpgkeys: HTTP fetch error 35: gnutls_handshake() failed: A TLS fatal alert has been received.\n", "gpg: requesting key 0x10CC5BC7 from hkps server hkps.pool.sks-keyservers.net\ngpg: no valid OpenPGP data found.\ngpg: Total number processed: 0\n"]
Forcing new Tor circuit...
calling as root: pidof -x -o '%PPID' vidalia
call returned: [1, "", ""]
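
(For context: the exit status 1 from that last pidof call means no vidalia process was found. As a minimal sketch — assumed shape, not the suite’s actual helper — the check amounts to this, using the $vm.execute helper that appears elsewhere in this ticket:)

  # True iff a vidalia process (other than the caller's parent) exists in
  # the guest; pidof exits non-zero when no matching process is found.
  vidalia_running = $vm.execute("pidof -x -o '%PPID' vidalia").success?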

But why wasn’t it running?


Subtasks


Related issues

Related to Tails - Feature #9518: Retry with new OpenPGP key server pool member when they misbehave Resolved 2015-06-02
Related to Tails - Feature #9517: Retry connecting to OFTC when it fails Resolved 2015-06-02

History

#1 Updated by kytv 2015-06-20 03:59:09

  • related to Feature #9518: Retry with new OpenPGP key server pool member when they misbehave added

#2 Updated by kytv 2015-06-20 03:59:21

  • related to Feature #9517: Retry connecting to OFTC when it fails added

#3 Updated by kytv 2015-06-20 04:00:39

  • Status changed from New to Confirmed

#4 Updated by kytv 2015-06-20 09:13:47

  • Status changed from Confirmed to New

Maybe “Confirmed” was premature. Perhaps I wasn’t waiting long enough for it to start up.

Edit: No. It just happened that during this (separate) Vidalia test I wasn’t waiting long enough. The description in this ticket shows I certainly had waited long enough: if we’re already fetching keys, Vidalia should be running.

#5 Updated by kytv 2015-06-20 09:16:36

  • Status changed from New to Confirmed

#6 Updated by kytv 2015-08-04 04:01:21

  • Target version changed from Tails_1.5 to Tails_1.6

#7 Updated by kytv 2015-09-06 05:07:03

  • Subject changed from Vidalia not running after restart-vidalia was run to vm.spawn appears occasionally fail to execute commands
  • Assignee changed from kytv to anonym
  • QA Check set to Info Needed
  • Starter set to No
  • Affected tool deleted (Vidalia)

AFAICT it’s a general problem with $vm.spawn. I’ve seen this while working on Feature #6306.

spawning as root: /usr/local/sbin/tails-i2p start
spawn returned: [0, "", ""]
spawning as root: restart-vidalia
spawn returned: [0, "", ""]
    Given Tails has booted from DVD with I2P enabled and logged in and the network is connected   # features/step_definitions/snapshots.rb:208
calling as root: . /usr/local/lib/tails-shell-library/i2p.sh; i2p_router_console_is_ready
call returned: [1, "", ""]
calling as root: . /usr/local/lib/tails-shell-library/i2p.sh; i2p_router_console_is_ready
[....]
    And the I2P router console is ready                                                           # features/step_definitions/i2p.rb:25
      try_for() timeout expired (Timeout::Error)
      ./features/support/helpers/misc_helpers.rb:74:in `rescue in try_for'
      ./features/support/helpers/misc_helpers.rb:24:in `try_for'
      ./features/step_definitions/i2p.rb:26:in `/^the I2P router console is ready$/'
      features/i2p.feature:21:in `And the I2P router console is ready'

The scenario failed because I2P wasn’t running.

I do not know how to debug this.

Any ideas anonym?

#8 Updated by kytv 2015-09-06 05:18:26

I can offer that when it fails, there’s no evidence that the command we want to run is actually run. I’ve come to this conclusion by looking at /var/log/i2p/wrapper.log in the VM: the bottom of the log file shows evidence of my stopping I2P, and the test suite debug output shows the spawn, but that’s it.
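
(A rough sketch of doing that inspection from the test suite side, using the $vm.file_content helper that appears in the snippet below; the 20-line tail is an arbitrary choice:)

  # Dump the tail of the I2P wrapper log from inside the guest, to see how
  # far I2P got before the trail goes cold.
  log = $vm.file_content('/var/log/i2p/wrapper.log')
  STDERR.puts log.split("\n").last(20).join("\n")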

My code snippet:

  if $vm.has_network?
    if $vm.execute("service tor status").success?
      # Tear down: stop Tor and I2P, clear Tor's log, kill Vidalia.
      $vm.execute("service tor stop")
      $vm.execute('/usr/local/sbin/tails-i2p stop')
      $vm.execute("rm -f /var/log/tor/log")
      $vm.execute("killall vidalia")
      $vm.host_to_guest_time_sync
      # Bring things back up in the background.
      $vm.spawn("restart-tor")
      wait_until_tor_is_working
      if $vm.file_content('/proc/cmdline').include?(' i2p')
        $vm.spawn('/usr/local/sbin/tails-i2p start')
      end
    end
  end

(Still working on figuring this out but I’m not feeling terribly confident that I will find what the problem is.)

The easiest way to reproduce what I’m seeing would probably be to run some network-enabled scenario which uses snapshots (old or Feature #6094 style), then check whether vidalia is running.

(Of course, there’s also the possibility that I’m completely wrong and maybe it’s just Vidalia and the i2p spawning problem is something else. I’m working on proving or disproving that now.)

#9 Updated by anonym 2015-09-07 03:48:36

  • Assignee changed from anonym to kytv
  • QA Check changed from Info Needed to Dev Needed

It may be interesting to see the terminal output of the remote shell server (config/chroot_local-includes/usr/local/lib/tails-autotest-remote-shell). Such logging can probably be implemented easily in its init script (config/chroot_local-includes/etc/init.d/tails-autotest-remote-shell), or in the server Python script itself if Python has some nice facility for redirecting stdout/stderr to a file. Then build an image with that and run it with a custom .feature that repeats a scenario where you do VM.spawn and then tries to verify that the spawned process indeed started; if it didn’t, fetch the remote shell server log and dump it to STDERR. Feel free to publish the branch and I may be able to help.
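
(A rough sketch of that verify-and-dump step, built from helpers that appear elsewhere in this ticket — $vm.spawn, $vm.execute, $vm.file_content and try_for; the remote shell log path is hypothetical and depends on where the init script is made to redirect stdout/stderr:)

When /^I spawn vidalia and verify it starts$/ do
  $vm.spawn('restart-vidalia')
  begin
    # try_for is the suite helper from the backtrace earlier in this
    # ticket; it raises Timeout::Error if the block never returns true.
    try_for(60) { $vm.execute("pidof -x -o '%PPID' vidalia").success? }
  rescue Timeout::Error
    # Hypothetical path: wherever tails-autotest-remote-shell's output was
    # redirected when building the image.
    STDERR.puts $vm.file_content('/tmp/remote-shell.log')
    raise
  end
end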

> (Of course, there’s also the possibility that I’m completely wrong and maybe it’s just Vidalia and the i2p spawning problem is something else. I’m working on proving or disproving that now.)

Maybe. We’ve had issues with our restart-vidalia wrapper before, IIRC.

#10 Updated by kytv 2015-09-22 03:48:35

  • Subject changed from vm.spawn appears occasionally fail to execute commands to vm.spawn appears to occasionally fail to execute commands
  • Target version changed from Tails_1.6 to Tails_1.7

#11 Updated by kytv 2015-11-04 10:46:05

  • Target version changed from Tails_1.7 to Tails_1.8

#12 Updated by kytv 2015-11-09 03:22:10

I’m currently running a test scenario to try to prove (or disprove) that $vm.spawn or restart-vidalia is the problem.

#13 Updated by kytv 2015-11-09 14:02:06

  • Subject changed from vm.spawn appears to occasionally fail to execute commands to restart-vidalia appears to occasionally fail to actually restart-vidalia

I created a test scenario which did the following:

  • add strategically placed touch /tmp/somefilename to various parts of /usr/local/sbin/restart-vidalia
  • run restart-vidalia using $vm.spawn
  • check for the existence of /tmp/the-files-I-created

The scenario was copied so that it appeared in the feature file 500 times.

  Background:
    Given I have started Tails from DVD and logged in and the network is connected

  Scenario: VM Spawn works
    When I restart vidalia
    Then process "vidalia" is running within 15 seconds

and

When /^I restart vidalia$/ do
  $vm.execute_successfully("rm -f /tmp/restart-vidalia*")
  $vm.spawn('/usr/local/sbin/restart-vidalia')
  step 'the file "/tmp/restart-vidalia1" exists after at most 60 seconds'
  step 'the file "/tmp/restart-vidalia2" exists after at most 60 seconds'
  step 'the file "/tmp/restart-vidalia3" exists after at most 60 seconds'
end

After 500 runs there was one failure. restart-vidalia was clearly run, since the /tmp/restart-vidalia* files existed, but Vidalia was not running.

This doesn’t necessarily rule out a problem in something other than restart-vidalia, but it does seem to rule out $vm.spawn.

A question that still remains: why did I see something similar with tails-i2p and other applications started with $vm.spawn?

#14 Updated by kytv 2015-11-26 07:55:31

  • Target version changed from Tails_1.8 to 246

#15 Updated by sajolida 2015-11-27 04:44:12

  • Target version changed from 246 to Tails_2.0

#16 Updated by kytv 2015-11-28 18:23:18

  • Target version changed from Tails_2.0 to Tails_2.2

Definitely won’t be investigating this before 2.0’s freeze.

#17 Updated by intrigeri 2015-11-30 02:07:37

> Target version changed from Tails_2.0 to Tails_2.2

> Definitely won’t be investigating this before 2.0’s freeze.

Note that the freeze on 2015-12-11 is a feature freeze, so per se it does not prevent addressing this problem :)

#18 Updated by intrigeri 2015-12-07 14:00:25

  • Subject changed from restart-vidalia appears to occasionally fail to actually restart-vidalia to restart-vidalia appears to occasionally fail to actually restart Vidalia

#19 Updated by anonym 2016-03-29 18:10:30

  • Target version changed from Tails_2.2 to Tails_2.3

#20 Updated by anonym 2016-04-13 05:27:13

  • Status changed from Confirmed to Rejected
  • Assignee deleted (kytv)
  • QA Check deleted (Dev Needed)

We do not ship Vidalia any more => rejecting.