# New Ticket Created by James E Keenan
# Please include the string: [perl #132651]
# in the subject line of all future correspondence about this issue.
# <URL: https://rt.perl.org/Ticket/Display.html?id=132651 >

This afternoon, while conducting a smoke test run in my normal manner on
FreeBSD-11.0, I noticed that I was getting messages on STDERR saying
"swap_pager: out of swap space". I would then get another message
indicating that the kernel was killing a particular perl PID. I had never
seen such messages before in this VM. Since I had to go out to a family
dinner, I didn't have time to diagnose or correct the problem.

When I came back six hours later, I saw that I had gotten failures in 3 of
the 4 variants within the smoke test run in this file:
./dist/PathTools/t/cwd_enoent.t (http://perl5.test-smoke.org/report/61071).
I had never seen problems with this file before (but, as it turns out,
that's in part because it's a brand-new file).

Usually when I have a test failure in one of my FreeBSD smokers, it's due
to an intermittent resource constraint, and I can get a PASS on the file
by running it manually. However, when I tried to do that here, I got an
endless stream of the following on STDERR:

    $ cd t; ./perl harness -v ../dist/PathTools/t/cwd_enoent.t; cd -
    pwd: .: No such file or directory    # <-- endlessly until Ctrl-C

This output appears before any other output from the test file. Indeed, I
never got to see any other output from this test file.

I have reproduced this problem in:

* the perl-current directory of my smoke test rig on FreeBSD-11.0;
* a build of blead in my regular git checkout on FreeBSD-11.0;
* a build of blead in my regular git checkout on FreeBSD-10.3.

The errant file was only committed to blead earlier today:

#####
commit d2e38af7de734aa1e317de7166c6995e432e2f30
Author: Zefram <zefram@fysh.org>
Date:   Sun Dec 24 11:09:54 2017 +0000

    correct error returns from _perl_abs_path()
#####

The commit should be reverted ASAP and tested in a branch on various OSes
until its problems are well understood.

Thank you very much.
Jim Keenan
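
For anyone wanting to reproduce the underlying condition by hand, here is
a minimal sketch (my own, not taken from the test file). Assuming
cwd_enoent.t exercises the case where the current directory has been
removed out from under the process, as its name suggests, this puts a
process into a removed directory and then asks Cwd for the cwd. The
scratch path below is hypothetical.

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Cwd ();

    # Hypothetical scratch path, for illustration only.
    my $dir = "/tmp/cwd_enoent_demo_$$";
    mkdir $dir or die "mkdir $dir: $!";
    chdir $dir or die "chdir $dir: $!";
    rmdir $dir or die "rmdir $dir: $!";  # the process cwd no longer exists

    # A well-behaved Cwd should fail cleanly here (ENOENT) rather than
    # loop or spawn `pwd` endlessly.
    my $cwd = Cwd::getcwd();
    if (defined $cwd && length $cwd) {
        print "getcwd returned: $cwd\n";
    }
    else {
        print "getcwd failed: $!\n";
    }

On a well-behaved perl I would expect the "getcwd failed" branch (or an
empty return); on the affected blead I would expect to see the same sort
of pwd errors quoted above.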