Bug #16747
"Scenario failed" logged when output from next scenario has already started
% Done: 100%
Description
On Jenkins, the “failure” below occurs consistently, but only on the feature/buster
branch (hence the initial Target version). On my system the “failure” does not occur at all.
# Depends on scenario: Writing files to a read/write-enabled persistent partition with the old Tails USB installation
Scenario: Upgrading an old Tails USB installation from a Tails DVD # features/usb_upgrade.feature:63
02:43:59.823080619: Remote shell: calling as root: echo 'hello?'
02:44:00.264660432: call returned: [0, "hello?\n", ""]
02:44:00.265304453: Sikuli: calling click(L(1023,384)@S(0)[0,0 1024x768])...
02:44:00.989188836: [log] CLICK on L(1023,384)@S(0)[0,0 1024x768]
02:44:00.989393736: Remote shell: calling as root: nmcli device show eth0
02:44:01.339875899: call returned: [0, "GENERAL.DEVICE: eth0\nGENERAL.TYPE: ethernet\nGENERAL.HWADDR: 50:54:00:C7:61:83\nGENERAL.MTU: 1500\nGENERAL.STATE: 20 (unavailable)\nGENERAL.CONNECTION: --\nGENERAL.CON-PATH: --\nWIRED-PROPERTIES.CARRIER: off\n", ""]
02:44:01.357742786: Remote shell: calling as root: date -s '@1558444689'
02:44:01.517745227: call returned: [0, "Tue 21 May 2019 01:18:09 PM UTC\n", ""]
Given I have started Tails from DVD without network and logged in # features/step_definitions/snapshots.rb:170
Scenario failed at time 02:43:37
Screenshot: https://jenkins.tails.boum.org/job/test_Tails_ISO_feature-buster/181/artifact/build-artifacts/02:43:37_Writing_files_to_a_read_write-enabled_persistent_partition_with_the_old_Tails_USB_installation.png
Systemd journal: https://jenkins.tails.boum.org/job/test_Tails_ISO_feature-buster/181/artifact/build-artifacts/02:43:37_Writing_files_to_a_read_write-enabled_persistent_partition_with_the_old_Tails_USB_installation.journal
Video: https://jenkins.tails.boum.org/job/test_Tails_ISO_feature-buster/181/artifact/build-artifacts/02:43:37_Writing_files_to_a_read_write-enabled_persistent_partition_with_the_old_Tails_USB_installation.mkv
And I clone USB drive "old" to a new USB drive "to_upgrade" # features/step_definitions/usb.rb:104
02:44:33.274244187: Remote shell: calling as root: test -b /dev/sda
02:44:33.889081260: call returned: [1, "", ""]
02:44:34.918959149: Remote shell: calling as root: test -b /dev/sda
02:44:35.139195484: call returned: [0, "", ""]
And I plug USB drive "to_upgrade" # features/step_definitions/common_steps.rb:66
02:44:35.141717832: Remote shell: calling as root: pidof -x -o '%PPID' gnome-terminal-server
[...]
I put “failure” in quotes above because, while we can see that an exception is raised, the scenario continues to run and actually reaches the end successfully. In the end the scenario is marked as a success.
The severity of this isn’t clear to me. The only clue we have about the exception is that it occurred at features/step_definitions/snapshots.rb:170, i.e. in the generated snapshot step.
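For context, here is a minimal Ruby sketch (not the actual Tails test suite code) of how a Cucumber After hook can produce the “Scenario failed at time …” block quoted above. If such a hook runs, or its output is flushed, only after the next scenario’s debug output has started, the failure report ends up interleaved with that output, which is what the log shows. The artifact file names and helpers below are illustrative assumptions, not the suite’s real implementation.

After do |scenario|
  next unless scenario.failed?
  time_of_fail = Time.now.strftime("%H:%M:%S")
  # Hypothetical artifact base name, modelled on the URLs seen in the log
  # (screenshot, systemd journal and video named after time and scenario).
  base = "#{time_of_fail}_#{scenario.name.gsub(/[^[:alnum:]]/, '_')}"
  puts "Scenario failed at time #{time_of_fail}"
  puts "Screenshot: #{base}.png"
  puts "Systemd journal: #{base}.journal"
  puts "Video: #{base}.mkv"
end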
History
#1 Updated by anonym 2019-05-23 12:48:33
- Status changed from Confirmed to In Progress
Applied in changeset commit:tails|54a8f7d57480d2c40664adbe9e8cbfdbbd3794af.
#2 Updated by anonym 2019-06-17 09:08:50
- Status changed from In Progress to Resolved
- % Done changed from 0 to 100
This was caused by commit:933483e0089b825138121a309c3c1e932d9341f8 because we run too old a version of Cucumber on Jenkins (Bug #10068). Fixed with commit:eccc25460099522ad26d33c93f3d70601ae63ee9.
#3 Updated by anonym 2019-06-17 09:12:36
- Subject changed from Spurious exception thrown in snapshot step to "Scenario failed" logged when output from next scenario has already started