On Fri, Jul 27, 2001 at 05:54:01PM +0200, Tels wrote:
> If I remember correctly, my solution does not slow it down except for one
> "if ()". There are no additional stats because:
>
> * if the nlink count is > 2, all is well and we proceed as usual.
> * if it is below that (i.e. 1 or 0), we have a filesystem with a wrong nlink
>   count, and we behave as if the dont_use_nlink variable were temporarily
>   set to 1. In this case the slow method is used, but it would have to be
>   used anyway.

I think this is a good way to do it.

> Summary: Filesystems with correct nlinks are traversed fast; upon
> entering one without, we slow down and get fast again once the slow
> filesystem is left. This decision is made on a per-directory basis, so
> having two slow filesystems (like /cdrom and /mnt/samba/server) in your
> tree works.
>
> Unless I overlooked something really big, there would be no slowdown.

nick@Bagpuss [Web]$ ls -al
total 99574
dr-x--x--x   2 nick     root         2048 Feb  2  1999 .
dr-x--x--x   2 nick     root         2048 Sep 21  1972 ..
dr-x--x--x   2 nick     root         2048 Feb  2  1999 ODMRS
dr-x--x--x   2 nick     root         2048 Feb  2  1999 P
dr-x--x--x   2 nick     root         2048 Feb  9  1999 foo

But it's probably easier to patch the adfs filesystem on Linux to give a
link count of 1 for directories. [there is no real . or .. entry - it's
faking it. And even if Russell King doesn't want such a change, I'm
compiling my kernels, not him :-)]

And adfs is hardly "really big"

> This is of course a good solution to increase the File::Find speed, but it
> won't help all these people that have file systems with wrong nlink count

Until someone can show me a spec that says that the link count on POSIX
systems should behave in this way, I'd urge avoiding describing this as
"wrong".

Nicholas Clark
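
[Editor's note: for reference, below is a minimal Perl sketch of the
per-directory heuristic Tels describes. The walk() routine and its callback
are illustrative assumptions, not File::Find's actual internals; the real
module is considerably more involved.]

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Sketch of the per-directory nlink heuristic: trust nlink only when
    # the directory reports a plausible count (>= 2, i.e. it really has
    # "." and ".." entries); otherwise fall back to stat()ing every entry.
    sub walk {
        my ($dir, $callback) = @_;
        my $nlink = (lstat $dir)[3];

        # On a well-behaved filesystem, nlink - 2 is the number of
        # subdirectories, so we can stop testing entries with -d once
        # we have seen them all.
        my $use_nlink    = defined $nlink && $nlink >= 2;
        my $subdirs_left = $use_nlink ? $nlink - 2 : -1;

        opendir my $dh, $dir or return;
        for my $entry (readdir $dh) {
            next if $entry eq '.' || $entry eq '..';
            my $path = "$dir/$entry";
            $callback->($path);

            # Fast path: all subdirectories already found, so the
            # remaining entries must be plain files - no stat needed.
            next if $use_nlink && $subdirs_left == 0;

            if (-d $path) {    # one stat per entry (the slow path)
                $subdirs_left-- if $use_nlink;
                # Recursing re-checks nlink, so a slow filesystem only
                # slows down its own subtree.
                walk($path, $callback);
            }
        }
        closedir $dh;
    }

    walk($ARGV[0] // '.', sub { print "$_[0]\n" });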