[Discuss] Corralling Processes on Linux
Kent Borg
kentborg at borg.org
Sat Jan 20 15:56:34 EST 2018
When I am dealing with files, cleanup is easy: if I am going to be
creating cruft, I can put it all in a single directory, with
subdirectories under it as needed, and when I am done, I can delete the
whole thing recursively and it is all gone. Easy.
Is there a way to do this with daemonized processes? I create an oddball
collection, and want the ability to kill the whole lot.
Process group IDs seem like the right tree to bark up, but it doesn't
look like I can tag a whole funny-shaped tree of processes with the
same group ID (is that true?). And my experiments along these lines
have run into "operation not permitted"; I don't want to have to do
this as root. (In the file example, I don't need to be root to put
files and directories in a directory...)
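For processes I launch myself, it looks like I could put them all in
one process group up front and then signal the group, no root needed,
since I own them. A rough Python sketch of the idea (untested; the
sleep commands stand in for my real programs):

    #!/usr/bin/env python3
    # Put children in one process group, then kill the whole group.
    import os
    import signal
    import subprocess
    import time

    # First child becomes leader of a new process group (same session,
    # so later children are allowed to join it).
    leader = subprocess.Popen(["sleep", "300"], preexec_fn=os.setpgrp)
    pgid = leader.pid  # setpgrp() makes the child its own group leader

    # Later children join the leader's group; the preexec hook runs in
    # the child between fork() and exec().
    follower = subprocess.Popen(["sleep", "300"],
                                preexec_fn=lambda: os.setpgid(0, pgid))

    time.sleep(1)
    os.killpg(pgid, signal.SIGTERM)  # signals every member of the group

The catch, as far as I can tell: setpgid() only works within a single
session, and anything that daemonizes itself the classic way (fork plus
setsid) starts a new session and escapes the group. So this corrals
well-behaved children but not true daemons.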
I thought about dropping a unique string on the command line of each
process. I won't be able to find it reliably with ps, since ps
truncates long command lines, but I could crawl through /proc/*/cmdline
and kill the ones I find. (Is there a way to do this on Mac OS?)
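Here is a rough sketch of that /proc crawl in Python (untested; the
marker string is made up, the idea being to pass it as an extra,
ignored argument when launching each process):

    #!/usr/bin/env python3
    # Kill every process whose command line contains our marker string.
    import os
    import signal

    MARKER = b"CORRAL-e5a1"  # hypothetical tag planted on each command line

    for entry in os.listdir("/proc"):
        if not entry.isdigit():          # only numeric entries are PIDs
            continue
        pid = int(entry)
        if pid == os.getpid():           # never signal ourselves
            continue
        try:
            with open(f"/proc/{pid}/cmdline", "rb") as f:
                cmdline = f.read()       # argv, NUL-separated
        except OSError:
            continue                     # process exited mid-scan
        if MARKER in cmdline:
            try:
                os.kill(pid, signal.SIGTERM)
                print("killed", pid)
            except ProcessLookupError:
                pass                     # died between scan and kill

On Mac OS there is no /proc, but pgrep -f / pkill -f match against the
full command line on both Linux and Mac OS, so "pkill -f CORRAL-e5a1"
should do roughly the same thing.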
I thought about creating all these processes as a different user and
then killing everything owned by that user, but that probably requires
root again (if that other user isn't me), and maybe I don't want to
kill /everything/ owned by that user (a login session?).
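If I did go the dedicated-user route, the same /proc crawl could filter
on owner instead of command line (a sketch, assuming a hypothetical
"sandbox" account; pkill -u does roughly this in one line):

    #!/usr/bin/env python3
    # Signal every process owned by a given user. Needs root only if
    # that user isn't the one running the script.
    import os
    import pwd
    import signal

    TARGET_USER = "sandbox"  # hypothetical dedicated account
    uid = pwd.getpwnam(TARGET_USER).pw_uid

    for entry in os.listdir("/proc"):
        if not entry.isdigit():
            continue
        pid = int(entry)
        try:
            if os.stat(f"/proc/{pid}").st_uid != uid:
                continue                 # owned by someone else
            os.kill(pid, signal.SIGTERM)
        except OSError:
            continue                     # raced with a process exiting

...but it has exactly the problem above: it can't tell my scratch
processes from a login session owned by that user.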
Why I am doing this: I am playing with lots of different processes
communicating with each other, with some maybe coming and going
incrementally. I want the ability to occasionally kill them all and
start from a clean slate. ("Oh, I had two of that one running for the
last hour! Silly /me/ to waste all that time!" ...I want to avoid that
class of bugs.)
Suggestions?
Thanks,
-kb