Make sure Vidalia always autostarts
In bridge mode, once the Tor bootstrap has finished (or around that point), Vidalia exits, at about the same time as iceweasel pops up. This should be fixed.
#10 Updated by anonym 2014-04-10 12:45:24
- Assignee changed from anonym to intrigeri
- QA Check changed from Dev Needed to Info Needed
> In some corner cases, Vidalia still does not restart properly (same as in 0.22.1, basically).
I haven’t seen anything like this in 0.23. Thanks to Bug #5394 I’ve tested bridge mode quite extensively, so if this bug were at all common I would expect to have run into it.
Without knowing what these “corner cases” are, I’m at a loss as to how to fix this (well, what to fix). I take it that this is hard to reproduce and happens seemingly at random? Is there any way you can elaborate? Does it seem related to how much clock skew there is?
One thought I have is that it could be related to *re*connecting the network: we restart Vidalia via NM hooks, and if some hooks haven’t finished when the network reconnects, the new connection’s hooks are simply queued after the old one’s. I wouldn’t be surprised if that could cause something like this. Having two concurrent connections (wired + Wi-Fi, for example) could probably result in a similar situation. Does any of this ring a bell?
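For illustration, a minimal sketch of what such an NM dispatcher hook could look like; the script below is hypothetical (the real Tails hook may differ), but it shows why long-running work in a hook delays everything queued behind it:

```sh
#!/bin/sh
# Hypothetical NetworkManager dispatcher hook, e.g.
# /etc/NetworkManager/dispatcher.d/60-vidalia (the name is illustrative).
# nm-dispatcher runs these scripts one at a time, so the hooks for a new
# connection simply queue behind any hook still running for the previous
# connection.
IFACE="$1"   # interface the event concerns, e.g. eth0
ACTION="$2"  # event type: up, down, ...

case "$ACTION" in
    up)
        # Anything slow here delays every hook queued after this one.
        /usr/local/sbin/restart-vidalia
        ;;
esac
```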
#11 Updated by intrigeri 2014-04-11 09:29:46
- Assignee changed from intrigeri to anonym
- QA Check deleted
I’m sorry I was not clearer a month ago (when I assumed this would be worked on quickly and fixed for 0.23), and I now don’t remember what exactly happened.
I suspect we have a weird race condition here. Perhaps try with a different number of cores, with a slower or faster Internet connection?
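For example, those factors could be varied from the host along these lines; the VM name and network device below are placeholders:

```sh
# Change the number of virtual cores (run while the VM is powered off):
VBoxManage modifyvm "tails-test" --cpus 2
# Add latency on the host to simulate a slower connection:
sudo tc qdisc add dev eth0 root netem delay 300ms
# Remove the latency again when done:
sudo tc qdisc del dev eth0 root netem
```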
Worst case, well, I guess we’ll ignore this for 1.0 and hope someone is able to file a more detailed bug report some day than what I was able to provide. Sorry again.
#13 Updated by anonym 2014-04-16 02:07:43
I’ve now seen Vidalia fail to start automatically after the initial network connection. This was in Tails 0.23 running inside VirtualBox; the VM had only one virtual NIC configured, and the system time was correct enough for tordate to do nothing. However, this was not in bridge mode, so I believe this is a more general bug, perhaps exacerbated by bridge mode if it is indeed more common in that situation.
> I suspect we have a weird race condition here. Perhaps try with a different number of cores, with a slower or faster Internet connection?
This seems like a good suspicion. I was building Tails at the same time, and was at the very CPU-heavy SquashFS compression stage while Tor was bootstrapping, which is consistent with this suspicion.
#14 Updated by anonym 2014-04-24 13:05:56
- Tracker changed from Feature to Bug
- Subject changed from Restart Vidalia in bridge mode too to Make sure Vidalia always autostarts
- Assignee deleted
- Target version deleted
- QA Check deleted
I can make Vidalia fail fairly consistently (say, 1/3 of the time) by booting Tails 1.0~rc1 in a CPU-starved VM (VirtualBox with one core capped at 20%). For even better reproducibility, one can run `cat /dev/urandom > /dev/null` or similar inside the VM. In these cases Vidalia segfaults:
vidalia: segfault at b ip 00000000f6033c6d sp 00000000f28536a4 error 4 in libpthread-2.11.3.so[f602c000+14000]
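For reference, the setup just described amounts to something like the following; the VM name is a placeholder:

```sh
# Host side: a single virtual core capped at 20% of one host core
# (run while the VM is powered off):
VBoxManage modifyvm "tails-test" --cpus 1 --cpuexecutioncap 20

# Guest side: burn whatever CPU is left to make the failure more likely:
cat /dev/urandom > /dev/null &
```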
I’ve also seen Vidalia segfault in the same way but in
However, in the instance of this error that I reported in Bug #5499#note-13, there is no report of Vidalia (or any other process, for that matter) segfaulting. I know this because I saved a VM snapshot and had a look; if you have ideas of other clues I could look for in this snapshot, please let me know.
I find it notable that only the vidalia process segfaults in this way when CPU-starved. I wonder if that is just a coincidence (and hence simply another way it can fail), or if something similar actually happened in the Bug #5499#note-13 instance as well, just with a silent failure mode.
I tried running `ps aux | grep -i vidalia` in several of these situations, and this is what I got:
- It reports nothing before vidalia is started, obviously.
- After Vidalia runs (full process chain; a sketch of the wrapper follows this list):
  root    4278  0.0  0.0   1792    64 ?  S  15:00  0:00 /bin/sh /usr/local/sbin/restart-vidalia
  root    4283  0.0  0.0   5548  1136 ?  S  15:00  0:00 sudo -u vidalia lckdo /var/lock/vidalia vidalia -DISPLAY=:0.0
  vidalia 4284  0.0  0.0   1580   244 ?  S  15:00  0:00 lckdo /var/lock/vidalia vidalia -DISPLAY=:0.0
  vidalia 4285 47.1  1.9 104904 39756 ?  Sl 15:00  0:25 vidalia -DISPLAY=:0.0
  vidalia 4303  0.0  0.0   3332   664 ?  S  15:00  0:00 dbus-launch --autolaunch 337262ba2b2bbd8c90321ea700000012 --binary-syntax --close-stderr
  vidalia 4304  0.0  0.0   2628   644 ?  Ss 15:00  0:00 /usr/bin/dbus-daemon --fork --print-pid 5 --print-address 7 --session
  vidalia 4306  0.0  0.1   8868  3756 ?  S  15:00  0:00 /usr/lib/libgconf2-4/gconfd-2
- All of the following cases have the same result:
  - starting Vidalia, verifying that it runs, then running `pkill -SIGKILL vidalia` (a half-assed attempt to simulate a crash);
  - after Vidalia segfaults (e.g. in a CPU-starved VM);
  - when Vidalia mysteriously doesn’t start (as in Bug #5499#note-13):
  vidalia 4177  0.0  0.0   3332   668 ?  S  01:01  0:00 dbus-launch --autolaunch 53177e0e8de62e49cd27ab5c00000031 --binary-syntax --close-stderr
  vidalia 4178  0.0  0.0   2628   640 ?  Ss 01:01  0:00 /usr/bin/dbus-daemon --fork --print-pid 5 --print-address 7 --session
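Judging only from the command lines visible in the ps output above, the wrapper presumably boils down to something like this hypothetical reconstruction (the real /usr/local/sbin/restart-vidalia may well differ):

```sh
#!/bin/sh
# Hypothetical reconstruction of /usr/local/sbin/restart-vidalia, inferred
# from the ps output above only; the real script may differ.
# lckdo (from moreutils) runs its command only if it can lock the given
# file, so two invocations cannot start concurrent Vidalia instances;
# conversely, a stale lock holder would silently prevent a restart.
sudo -u vidalia lckdo /var/lock/vidalia vidalia -DISPLAY=:0.0
```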
In other words, in all buggy cases I’ve observed, Vidalia was in fact started by our NM hooks (as shown by the dbus remnants) but failed either loudly (segfault) or quietly (like Bug #5499#note-13). The existence of e.g. /home/vidalia/.vidalia/vidalia.pid (as well as other remnants of a running vidalia process in the vidalia user’s home folder) in these latter cases also suggests this.
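To summarize the triage steps used above as a checklist (the paths are the ones observed in this comment):

```sh
# dbus remnants imply our NM hooks did start Vidalia at some point:
ps aux | grep -i '[v]idalia'
# A leftover pid file implies a vidalia process ran and then died:
ls -l /home/vidalia/.vidalia/vidalia.pid
# The loud failure mode shows up in the kernel log:
dmesg | grep -i segfault
```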
My guess is that there is indeed a race condition, but that it is in Vidalia. If so, I also expect this to be a Heisenbug, so firing up a debugger seems like a waste of time. Perhaps we should see this as just another reason to drop Vidalia, given its lack of maintenance.
#18 Updated by anonym 2014-05-28 05:31:28
So far I haven’t seen the problem in Wheezy. However, trying to starve the CPU generally crashes GNOME completely, showing an error screen (with a “sad” computer), which basically makes Tails unusable until you reboot (or restart gdm3 from a terminal, if you set an administration password).
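For reference, the recovery step mentioned above amounts to something like this on Wheezy (assuming an administration password was set):

```sh
# Restart the display manager instead of rebooting the whole system:
sudo service gdm3 restart
```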