Bug #5499
Make sure Vidalia always autostarts
50%
Description
In bridge mode, once the Tor bootstrap has finished (or around that time), Vidalia exits, at about the same time as iceweasel pops up. This should be fixed.
Subtasks
History
#1 Updated by intrigeri 2013-07-25 07:17:40
- Parent task set to Feature #5479
- Starter set to No
#2 Updated by intrigeri 2013-07-25 07:24:04
- Parent task changed from Feature #5479 to Feature #5920
#3 Updated by intrigeri 2013-10-12 09:06:15
- Status changed from Confirmed to Resolved
I can’t reproduce it anymore. This must have been fixed somehow :)
#4 Updated by malaparte 2013-10-12 11:22:41
I reproduced the bug on 20.0.0 and double-checked: Vidalia exits while Tor keeps functioning.
#5 Updated by intrigeri 2013-10-12 11:27:26
- Status changed from Resolved to Confirmed
Reopening, then…
#6 Updated by intrigeri 2013-12-08 14:33:27
- Assignee set to anonym
#7 Updated by intrigeri 2014-03-05 01:34:09
- Status changed from Confirmed to In Progress
- Assignee changed from anonym to sajolida
- % Done changed from 0 to 50
- QA Check set to Ready for QA
Code review passes for me. Reassigning to sajolida for testing.
#8 Updated by intrigeri 2014-03-06 16:26:51
- Assignee changed from sajolida to anonym
- QA Check changed from Ready for QA to Dev Needed
In some corner cases, Vidalia still does not restart properly (same as in 0.22.1, basically).
#9 Updated by intrigeri 2014-04-06 18:50:37
- Subject changed from restart Vidalia in bridge mode too to Restart Vidalia in bridge mode too
#10 Updated by anonym 2014-04-10 12:45:24
- Assignee changed from anonym to intrigeri
- QA Check changed from Dev Needed to Info Needed
intrigeri wrote:
> In some corner cases, Vidalia still does not restart properly (same as in 0.22.1, basically).
I haven’t seen anything like this in 0.23. Thanks to Bug #5394 I’ve tested bridge mode quite extensively, so if this bug were still common I’d expect to have run into it.
Without knowing what these “corner cases” are, I’m at a loss as to how to fix this (well, what to fix). I take it that this is hard to reproduce and that it happens seemingly at random? Is there any way you can elaborate? Does it seem related to how much clock skew there is?
One thought I have is that it could be related to *re*connecting the network: we restart Vidalia via NM hooks, and if some hooks haven’t finished when the network reconnects, the new connection’s hooks are simply queued after the old one’s. I wouldn’t be surprised if that could cause something like this. Having two concurrent connections (wired + Wi-Fi, for example) could probably result in a similar situation. Does any of this ring a bell?
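For reference, the restart mechanism is roughly a NetworkManager dispatcher hook along these lines (a simplified sketch only, not the actual Tails script; the file name and the exact condition are assumptions, only the restart-vidalia path is the real one). NetworkManager runs the scripts in /etc/NetworkManager/dispatcher.d/ one after the other for every connection event, which is why hooks belonging to an old connection can still be queued when a new one comes up:
#!/bin/sh
# Hypothetical hook, e.g. /etc/NetworkManager/dispatcher.d/60-vidalia
# NetworkManager passes the interface name as $1 and the event as $2.
if [ "$2" = "up" ]; then
    # (Re)start Vidalia for the new connection; the wrapper runs it as
    # the vidalia user under lckdo so only one instance holds the lock.
    /usr/local/sbin/restart-vidalia
fi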
#11 Updated by intrigeri 2014-04-11 09:29:46
- Assignee changed from intrigeri to anonym
- QA Check deleted (Info Needed)
I’m sorry I was not clearer a month ago (when I assumed this would be worked on quickly and fixed for 0.23), and I now don’t remember what exactly happened.
I suspect we have a weird race condition here. Perhaps try with a different number of cores, or with a slower or faster Internet connection?
Worst case, well, I guess we’ll ignore this for 1.0 and hope someone is able to file a more detailed bug report some day than what I was able to provide. Sorry again.
#12 Updated by intrigeri 2014-04-15 14:50:38
- QA Check set to Info Needed
#13 Updated by anonym 2014-04-16 02:07:43
I’ve now seen Vidalia fail to start automatically after the initial network connection. This was in Tails 0.23 running inside VirtualBox; the VM had only one virtual NIC configured, and the system time was correct enough for tordate to do nothing. However, this was not in bridge mode, so I believe this is a more general bug, which bridge mode perhaps only makes more likely, if it is indeed more common in that situation.
intrigeri wrote:
> I suspect we have a weird race condition here. Perhaps try with a different number of cores, with a slower or faster Internet connection?
This seems like a good suspicion. I was building Tails at the same time, and the build was at the very CPU-heavy squashfs compression stage while Tor was bootstrapping, which seems in line with this.
#14 Updated by anonym 2014-04-24 13:05:56
- Tracker changed from Feature to Bug
- Subject changed from Restart Vidalia in bridge mode too to Make sure Vidalia always autostarts
- Assignee deleted (anonym)
- Target version deleted (Tails_1.0)
- QA Check deleted (Info Needed)
I can make Vidalia fail fairly consistently (say 1/3 of the time) by booting Tails 1.0~rc1 in a CPU-starved VM (VirtualBox with one core capped at 20%). For even better reproducibility one can run `cat /dev/urandom > /dev/null` or similar inside the VM. In these cases Vidalia segfaults:
vidalia[4515]: segfault at b ip 00000000f6033c6d sp 00000000f28536a4 error 4 in libpthread-2.11.3.so[f602c000+14000]
I’ve also seen Vidalia segfault in the same way, but in libQtCore.so.4.6.3.
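For the record, the CPU-starved setup described above boils down to something like this (a sketch only; the VM name “tails-test” is made up, and the exact cap value is not critical):
# On the host: one virtual core, capped at 20% of its capacity.
VBoxManage modifyvm "tails-test" --cpus 1 --cpuexecutioncap 20
VBoxManage startvm "tails-test"
# Inside the running Tails guest: burn whatever CPU is left, to make
# the failure more likely.
cat /dev/urandom > /dev/null &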
However, in the instance of this error that I reported in Bug #5499#note-13 there’s no report of Vidalia (or any other process, for that matter) segfaulting. I know this because I saved a VM snapshot and had a look, so if you have ideas of other clues I could look for in that snapshot, please let me know.
I find it notable that it’s only the vidalia process that segfaults in this way when the system is CPU-starved. I wonder if that’s just a coincidence (and hence another way in which it may fail) or if something similar actually happened in the Bug #5499#note-13 instance as well, just that its failure mode was silent.
I tried running `ps aux | grep -i vidalia` in several of these situations, and this is what I got:
- It reports nothing before vidalia is started, obviously.
- After Vidalia runs:
root 4278 0.0 0.0 1792 64 ? S 15:00 0:00 /bin/sh /usr/local/sbin/restart-vidalia
root 4283 0.0 0.0 5548 1136 ? S 15:00 0:00 sudo -u vidalia lckdo /var/lock/vidalia vidalia -DISPLAY=:0.0
vidalia 4284 0.0 0.0 1580 244 ? S 15:00 0:00 lckdo /var/lock/vidalia vidalia -DISPLAY=:0.0
vidalia 4285 47.1 1.9 104904 39756 ? Sl 15:00 0:25 vidalia -DISPLAY=:0.0
vidalia 4303 0.0 0.0 3332 664 ? S 15:00 0:00 dbus-launch --autolaunch 337262ba2b2bbd8c90321ea700000012 --binary-syntax --close-stderr
vidalia 4304 0.0 0.0 2628 644 ? Ss 15:00 0:00 /usr/bin/dbus-daemon --fork --print-pid 5 --print-address 7 --session
vidalia 4306 0.0 0.1 8868 3756 ? S 15:00 0:00 /usr/lib/libgconf2-4/gconfd-2
- All these cases have the same result:
  - Starting vidalia, verifying that it runs, then `pkill -SIGKILL vidalia` (half-assed attempt to simulate a crash)
  - After vidalia segfaults (e.g. in a CPU-starved VM)
  - When vidalia mysteriously doesn’t start (Bug #5499#note-13)
vidalia 4177 0.0 0.0 3332 668 ? S 01:01 0:00 dbus-launch --autolaunch 53177e0e8de62e49cd27ab5c00000031 --binary-syntax --close-stderr
vidalia 4178 0.0 0.0 2628 640 ? Ss 01:01 0:00 /usr/bin/dbus-daemon --fork --print-pid 5 --print-address 7 --session
In other words, in all buggy cases I’ve observed, Vidalia was in fact started by our NM hooks (as shown by the dbus remnants) but failed either loudly (segfault) or quietly (like Bug #5499#note-13). The existence of e.g. /home/vidalia/.vidalia/vidalia.pid (as well as other remnants of a running vidalia process in the vidalia user’s home folder) in these latter cases also suggests this.
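To double-check a failed boot for the same clues, something like the following should do (a sketch; the paths are the ones mentioned above, the dmesg check is just an extra suggestion):
# Any vidalia processes still alive? (the [v] keeps grep from matching itself)
ps aux | grep -i '[v]idalia'
# Remnants left behind if Vidalia did start at some point:
ls -l /home/vidalia/.vidalia/vidalia.pid
ls -l /var/lock/vidalia
# A loud failure shows up as a kernel segfault message:
dmesg | grep -i segfault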
My guess is that there indeed is a race condition, but that it is in Vidalia. If so, I also expect this to be a Heisenbug, so firing up a debugger seems like a waste of time. Perhaps we should see this as just another reason for us to drop Vidalia, given its lack of maintenance.
#15 Updated by intrigeri 2014-04-26 08:31:53
- Assignee set to anonym
- QA Check set to Info Needed
anonym, can you reproduce this on a build from the devel branch? Who knows, perhaps Wheezy’s Qt fixes this.
#16 Updated by intrigeri 2014-04-26 08:32:35
- Related to Feature #6841: Replace Vidalia added
#17 Updated by BitingBird 2014-05-27 11:16:31
- Target version set to Tails_1.1
The parent ticket is marked for 1.1, so I set the same version here.
#18 Updated by anonym 2014-05-28 05:31:28
So far I haven’t seen the problem in Wheezy. However, trying to starve the CPU generally results in GNOME crashing completely, showing an error screen (with a “sad” computer), which basically makes Tails unusable until you reboot (or restart gdm3 from a terminal, if you set an admin password).
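(For reference, restarting the display manager from a terminal is something like the following; a sketch, assuming an administration password was set in the Greeter:)
# From a terminal, using the administration password set at boot:
sudo service gdm3 restart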
#19 Updated by intrigeri 2014-06-10 10:49:06
- Assignee deleted (anonym)
- Target version deleted (Tails_1.1)
- Parent task deleted (Feature #5920)
(Changes decided with my co-RM, anonym.)
#20 Updated by BitingBird 2015-01-02 23:22:44
- Status changed from In Progress to Rejected
Seems like Wheezy fixed this, so closing.