Bug #10476

Jenkins workspace cleanup misses Sikuli's .vlog.png files

Added by bertagaz 2015-11-04 04:28:43. Updated 2015-12-07 05:31:28.

Status:
Resolved
Priority:
Normal
Assignee:
Category:
Continuous Integration
Target version:
Start date:
2015-11-04
Due date:
% Done:

100%

Feature Branch:
puppet-tails:master
Type of work:
Sysadmin
Blueprint:

Starter:
0
Affected tool:
Deliverable for:
267

Description

Sometimes after a test job run, the workspace on the isotesters isn’t empty as it should be with the Workspace Cleanup Jenkins plugin we’re using. One can then find quite a few files (though not always the same number) of the form:

2625-00-Input.vlog.png
2613-01-Canny.vlog.png
2566-03-LongLinesFound.vlog.png
2560-03-LongLinesFound.vlog.png
2587-09-lineblobs-filtered.vlog.png
2603-00-Input.vlog.png
2613-09-lineblobs-filtered.vlog.png
2619-04-LongLinesRemoved.vlog.png
2622-06-blobs-extracted.vlog.png
2557-09-lineblobs-filtered.vlog.png
2610-03-LongLinesFound.vlog.png
2608-04-LongLinesRemoved.vlog.png
2556-00-Input.vlog.png
2626-06-blobs-extracted.vlog.png
2596-09-lineblobs-filtered.vlog.png
2565-06-blobs-extracted.vlog.png
2610-05-NonEdgeRemoved.vlog.png
2577-06-blobs-extracted.vlog.png

This is a bit annoying, because after a while these files tend to accumulate and take up too much room in the partition, making test jobs fail because they can’t fetch the ISO they need.

This seems related to Sikuli, which creates that kind of file in its logging code.

It’s difficult to tell when this happens, but it might be when a test job is aborted and Sikuli is killed.

I guess a good quick workaround would be to set up a cronjob that deletes these files once in a while.
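For illustration, such a cronjob could look roughly like this (a minimal sketch only; the workspace path and the schedule are made-up examples, not what actually runs on the isotesters):

  # Hypothetical crontab entry: once a day, delete leftover Sikuli
  # debugging images from the Jenkins workspaces.
  # /var/lib/jenkins/workspace is an assumed path.
  @daily  find /var/lib/jenkins/workspace -name '*.vlog.png' -delete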


Subtasks


History

#1 Updated by bertagaz 2015-11-04 04:29:46

Assigning to anonym to get his opinion about this Sikuli thing, and whether it can be worked around in a better way directly in the test suite.

Feel free to re-assign to me after that. :)

#2 Updated by bertagaz 2015-11-04 04:30:12

#3 Updated by bertagaz 2015-11-04 06:26:19

  • % Done changed from 0 to 20
  • Feature Branch set to puppet-tails:bugfix/10467-remove-sikuli-leftovers

bertagaz wrote:
> I guess a good quick workaround would be to set up a cronjob that deletes these files once in a while.

Meanwhile I’ve pushed a cronjob for that in the feature branch. Not yet applied.

#4 Updated by intrigeri 2015-11-05 02:55:21

  • blocks #8668 added

#5 Updated by intrigeri 2015-11-05 02:57:06

> Sometimes after a test job run, the workspace on the isotesters isn’t empty as it should be with the Workspace Cleanup Jenkins plugin we’re using.

I’m not sure I understand. Isn’t it precisely the Workspace Cleanup plugin’s job to deal with whatever leftovers jobs may be leaving in the workspace directories?

#6 Updated by anonym 2015-11-05 03:57:38

bertagaz wrote:
> This seems related to Sikuli, which creates that kind of file in its logging code.

I think these are created when Sikuli’s (almost worthless) OCR engine runs.

> It’s difficult to tell when this happens, but it might be when a test job is aborted and Sikuli is killed.

One thing in Sikuli that is pretty insane is that methods like wait(string) have the string overloaded with two meanings:

1. the path to an image

2. a text to search for with OCR

AFAICT, if 1 fails, Sikuli will proceed with 2. Yay. So this seems to indicate that some image that we’re looking for is missing, because we are not intentionally using OCR anywhere.
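As a rough illustration (a hypothetical Sikuli/Jython snippet; the image name, text and timeout are made up), the two meanings look identical at the call site:

  # Hypothetical Sikuli (Jython) snippet illustrating the overloaded wait():
  wait("SomeButton.png", 10)   # meaning 1: the path to an image
  wait("Some label text", 10)  # meaning 2: text to search for with OCR
  # If the first string does not resolve to an image, Sikuli may fall back
  # to OCR, which is when the *.vlog.png debug images get written.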

Anyway, why not just clean up these files after each run? In fact, shouldn’t the whole workspace be cleared?

#7 Updated by bertagaz 2015-11-05 04:20:57

intrigeri wrote:
> > Sometimes after a test job run, the workspace on the isotesters isn’t empty as it should be with the Workspace Cleanup Jenkins plugin we’re using.
>
> I’m not sure I understand. Isn’t it precisely the Workspace Cleanup plugin’s job to deal with whatever leftovers jobs may be leaving in the workspace directories?

Yes, and it works, except for these files.

I’ve looked a bit this morning: the jobs that contained that kind of file were the ones that were aborted. The Workspace Cleanup step is still executed in this case, but maybe it’s related to the way the test suite run is killed by Jenkins and the plugin. Perhaps Sikuli is killed in a weird way and dumps these files after the workspace cleanup plugin step has been executed?

anonym wrote:
> I think these are created when Sikuli’s (almost worthless) OCR engine runs.

Well, I’ve never seen that kind of file when running the test suite, even in $TMP_DIR or /tmp/sikuli_cache.

> Any way, why not just cleanup these files after each run? In fact, shouldn’t the whole work space be cleared?

They are: even when the build is aborted, all other files and directories in the workspace are cleaned. Only these files remain, which is why it’s a bit weird; I tend to think something bad happens later in the aborted test job run, after the workspace cleanup step (the last step of a test job) has been executed. So it’s too late for us to remove anything at that point.

I was wondering whether there might be an option in Sikuli to disable the use of these files? I’m a bit lost in its code (see the pointer in the description). If not, the only workaround I see is the cronjob option.

#8 Updated by intrigeri 2015-11-05 05:34:21

>> I think these are created when Sikuli’s (almost worthless) OCR engine runs.

> Well, I’ve never seen that kind of file when running the test suite, even in $TMP_DIR or /tmp/sikuli_cache.

I’ve seen them regularly (I think when developing/updating tests with --retry-find).

#9 Updated by intrigeri 2015-11-06 07:36:05

  • Deliverable for changed from 268 to 267

#11 Updated by bertagaz 2015-11-12 04:28:07

intrigeri wrote:
> I’ve seen them regularly (I think when developing/updating tests with --retry-find).

Interesting. Well, it confirms that Sikuli does dump this kind of file sometimes. I guess they are left behind when the build is aborted in the middle of Sikuli’s OCR processing.

So unless there’s an option in Sikuli to specify where to put these files (like in $TEMP_DIR), I think we don’t have much choice but to periodically clean them up.

The cronjob is already ready in the feature branch.

#12 Updated by intrigeri 2015-11-16 04:35:15

>> Anyway, why not just clean up these files after each run? In fact, shouldn’t the whole workspace be cleared?

> They are: even when the build is aborted, all other files and directories in the workspace are cleaned. Only these files remain, which is why it’s a bit weird; I tend to think something bad happens later in the aborted test job run, after the workspace cleanup step (the last step of a test job) has been executed. So it’s too late for us to remove anything at that point.

We would see exactly the same symptoms if the workspace cleanup step was run before the Sikuli process had fully terminated, and then, after the cleanup is done, some signal handler finished its job by saving these files. It would be interesting to know how Sikuli is killed/terminated in the cases when those .vlog.png files are left over.

Can we insert some step (or whatever it’s called in Jenkins-speak) that always kills (SIGKILL) the process Sikuli runs in, after we consider the test suite run finished, and before we run any follow-up (e.g. workspace cleanup) step?
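For example (a sketch only; the process name pattern is an assumption, and whether Jenkins offers a suitable hook for this is precisely the question):

  # Hypothetical shell step to run after the test suite and before the
  # workspace cleanup: forcibly kill any leftover Sikuli/Java process so
  # it cannot write *.vlog.png files once the cleanup is done.
  pkill -KILL -f sikuli || true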

#13 Updated by intrigeri 2015-11-16 04:40:10

  • Assignee changed from anonym to bertagaz
  • QA Check set to Dev Needed

> So unless there’s an option in Sikuli to specify where to put this files (like in $TEMP_DIR), I think we don’t have much choices but to periodically clean this files.

Yes, if our Jenkins skills don’t allow us to easily ensure proper ordering between process killing and filesystem cleanups.

> The cronjob is already ready in the feature branch.

Sorry if I didn’t get it right, but “periodically” doesn’t seem to reflect what we need. Let me explain. It sounds like what we need is a fresh isotesterN every time we want to run the test suite, and ideally “fresh” includes “all workspaces have been cleaned”. Thankfully, we reboot isotesterN each time we want to run a test suite on it. Therefore, doing this cleanup at boot time (and having the jenkins-slave block on it) would be more efficient and encode exactly what we want, rather than an approximation that will work most of the time. Did I miss anything?
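Something along these lines could encode that ordering (a sketch only; the workspace path, the unit name and the jenkins-slave unit name are assumptions, not necessarily what ended up in puppet-tails):

  # Hypothetical /etc/systemd/system/tails-clean-workspace.service
  [Unit]
  Description=Remove leftover Sikuli .vlog.png files from Jenkins workspaces
  Before=jenkins-slave.service

  [Service]
  Type=oneshot
  ExecStart=/usr/bin/find /var/lib/jenkins -name '*.vlog.png' -delete

  [Install]
  WantedBy=multi-user.target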

#14 Updated by bertagaz 2015-11-18 04:45:22

  • Assignee changed from bertagaz to intrigeri
  • QA Check changed from Dev Needed to Info Needed

intrigeri wrote:
> Sorry if I didn’t get it right, but “periodically” doesn’t seem to reflect what we need. Let me explain. It sounds like what we need is a fresh isotesterN every time we want to run the test suite, and ideally “fresh” includes “all workspaces have been cleaned”. Thankfully, we reboot isotesterN each time we want to run a test suite on it. Therefore, doing this cleanup at boot time (and having the jenkins-slave block on it) would be more efficient and encode exactly what we want, rather than an approximation that will work most of the time. Did I miss anything?

Nope, that makes sense. It’s a somewhat more complicated option than a cronjob, though, and I don’t think I’ll be able to implement it before the end of November. So I propose to merge the current cronjob bugfix branch in the meantime, so that the situation doesn’t require someone to take care of it manually. Does that make sense too?

#15 Updated by intrigeri 2015-11-18 05:44:00

  • Assignee changed from intrigeri to bertagaz

> So I propose to merge the current cronjob bugfix branch in the meantime, so that the situation doesn’t require someone to take care of it manually. Does that make sense too?

I’ll try to set up the unit file I proposed later this week, possibly today or tomorrow; should be mostly trivial unless I missed what makes it more complicated. If I fail to do so, feel free to merge the cronjob thing. Deal?

#16 Updated by bertagaz 2015-11-18 06:15:46

  • Status changed from Confirmed to In Progress
  • Assignee changed from bertagaz to intrigeri
  • QA Check changed from Info Needed to Dev Needed

intrigeri wrote:
> > So I propose to merge the current cronjob bugfix branch in the meantime, so that the situation doesn’t require someone to take care of it manually. Does that make sense too?
>
> I’ll try to set up the unit file I proposed later this week, possibly today or tomorrow; should be mostly trivial unless I missed what makes it more complicated. If I fail to do so, feel free to merge the cronjob thing. Deal?

Awesome! Apart from the fact that I don’t know much about writing unit files, and that this first one of mine would probably take a while to write (even more so considering jenkins-slave is still sysvinit-style and I’m not sure how it interacts with unit files), I had also forgotten our isotesters run Jessie, so we can use systemd. :)

#17 Updated by intrigeri 2015-11-20 02:34:12

  • Subject changed from Sikuli sometimes fills Jenkins workspaces with .vlog.png files to Jenkins workspace cleanup misses Sikuli's .vlog.png files

#18 Updated by intrigeri 2015-11-20 02:58:02

  • Assignee changed from intrigeri to bertagaz
  • % Done changed from 20 to 50
  • QA Check changed from Dev Needed to Ready for QA
  • Feature Branch changed from puppet-tails:bugfix/10467-remove-sikuli-leftovers to puppet-tails:master

Done and deployed on our 4 isotesters; it will be effective as soon as each of them reboots. Please review.

#19 Updated by bertagaz 2015-12-07 05:31:28

  • Status changed from In Progress to Resolved
  • Assignee deleted (bertagaz)
  • % Done changed from 50 to 100
  • QA Check deleted (Ready for QA)

Confirmed: the files get deleted on boot now, thanks for it!