diff -u: What's New in Kernel Development
The OOM killer is a tough nut to crack. How can a system recover when it's thrashing violently and out of RAM? Once upon a time, you'd just have to reboot. Today that still might be necessary, but less often, because the OOM killer attempts to identify and stop the process that seems to be causing the hangup. One problem is that it may not choose the right process every time. Another is that the whole subsystem is notoriously difficult to get right.
Michal Hocko recently tried to peel off a sliver of the problem to work on, taking the lead from Mel Gorman and Oleg Nesterov. Apparently, the current OOM killer would allocate an extra batch of memory for the process it wanted to kill, to give it enough breathing room to terminate properly. But under some circumstances, the process would accept the extra memory and still hang the system. Then, with no more memory to dole out, the OOM killer couldn't try again, and it was time to hit the reset button.
Michal posted a patch to create a new kernel thread that would reclaim that extra memory if it went unused. Then the OOM killer could try the same thing on a different process and hopefully have a different result. And although there were no major objections to Michal's patch itself, a variety of folks objected to the idea of making any kind of incremental improvement to the OOM killer, when the Big Problem had not yet been solved.
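Michal's actual patch works deep in the kernel's memory-management code, but the core idea can be illustrated with a loose userspace analogue. The sketch below is not the real patch; every name in it (reserve, victim, reaper) is invented for illustration. A doomed "victim" is lent a memory reserve and given a grace period to exit; if it's still hung when the timer expires, a reaper thread pulls the reserve back so it can be offered to the next candidate:

/* Userspace analogue of the "reap the unused reserve" idea.
 * Illustration only; the real patch operates on the victim's
 * address space inside the kernel. Build with -lpthread. */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define RESERVE_BYTES (1 << 20)    /* hypothetical emergency reserve */

static void *reserve;              /* memory lent to the victim */
static int victim_exited;          /* set when the victim terminates */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

/* Victim: gets the reserve so it has room to terminate cleanly.
 * In the failure case Michal described, it hangs instead. */
static void *victim(void *arg)
{
    (void)arg;
    sleep(10);                     /* simulate a victim that never dies */
    pthread_mutex_lock(&lock);
    victim_exited = 1;
    pthread_mutex_unlock(&lock);
    return NULL;
}

/* Reaper: after a grace period, take the reserve back if the victim
 * hasn't used it to exit, so the next kill attempt has memory to use. */
static void *reaper(void *arg)
{
    (void)arg;
    sleep(2);                      /* grace period (illustrative) */
    pthread_mutex_lock(&lock);
    if (!victim_exited && reserve) {
        free(reserve);             /* reclaim the unused reserve */
        reserve = NULL;
        printf("reaper: victim hung; reserve reclaimed for retry\n");
    }
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void)
{
    pthread_t v, r;

    reserve = malloc(RESERVE_BYTES);
    pthread_create(&v, NULL, victim, NULL);
    pthread_create(&r, NULL, reaper, NULL);
    pthread_join(r, NULL);
    /* In the real kernel, the OOM killer would now pick another task. */
    return 0;
}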
The Big Problem, as described by Johannes Weiner, was how to resolve memory deadlocks in general. Only by solving that problem could the OOM killer successfully kill the processes it needed to, even to the point of killing all user processes, just to keep the kernel up.
But Michal made a point of keeping the discussion focused strictly on the small fixes he'd proposed. He acknowledged that he had no solution for the Big Problem, and he pointed out that no one else seemed to have a viable solution for it either. And until something viable came along, Michal saw no point in stalling OOM killer development: if something could be done to improve it, he felt, it should be done.
By and large, everyone went along with this. Still, it's clear there's a lot of pressure on the OOM killer to come up with some kind of new idea, or at least a policy-based system that puts the choice of which processes to kill into the hands of system administrators rather than kernel algorithms.
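For what it's worth, one such administrator-facing knob already exists: writing a value in the range -1000 to 1000 to /proc/<pid>/oom_score_adj biases the badness score the OOM killer computes for that process, with -1000 exempting it entirely (lowering the value below its current setting requires privilege). It's a far cry from a full policy system, but it shows the shape such control takes today. A minimal example:

/* Bias the kernel's OOM victim selection for the current process
 * by writing to /proc/self/oom_score_adj. */
#include <stdio.h>

int main(void)
{
    FILE *f = fopen("/proc/self/oom_score_adj", "w");
    if (!f) {
        perror("fopen");
        return 1;
    }
    /* Positive values make this process a more attractive victim. */
    fprintf(f, "500\n");
    fclose(f);
    return 0;
}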
Linus Torvalds had some advice for anyone writing kernel code that needs to lock resources: it's probably better to use existing locking implementations than to roll your own, at least until you know what you're doing. As he put it:
People need to realize that locking is harder than they think, and not cook up their own lock primitives using things like trylock without really thinking about it a lot.
Basically, trylock() on its own should never be used in a loop. The main use for trylock should be one of:

1) Thing that you can just not do at all if you can't get the lock.

2) Avoiding ABBA deadlocks: if you have an A->B locking order, but you already hold B, instead of "drop B, then take A and B in the right order", you may decide first to trylock(A), and if that fails, you then fall back on the "drop and relock in the right order".

But if what you want to create is a "get lock using trylock", you need to be very aware of the cache coherency traffic issue at least.

It is possible that we should think about trying to introduce a new primitive for that loop_try_lock() thing. But it's probably not common enough to be worth it; we've had this issue before, but I think it's a "once every couple of years" kind of thing rather than anything that we need to worry about.

The "locking is hard" issue is very real, though. We've traditionally had a lot of code that tried to do its own locking, and not getting the memory ordering right, etc. Things that happen to work on x86 but don't on other architectures, etc.
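To make case 2) concrete, here's a minimal sketch using POSIX mutexes rather than the kernel's own primitives; the function and lock names are invented for illustration. The point is that trylock is attempted exactly once, never spun in a loop, and failure falls back to taking the locks in the canonical order:

/* ABBA avoidance with a single trylock attempt, per Linus' case 2).
 * Lock-order rule: A before B. This function is called holding B. */
#include <pthread.h>

static pthread_mutex_t A = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t B = PTHREAD_MUTEX_INITIALIZER;

/* Called with B held; returns with both A and B held. */
static void lock_a_while_holding_b(void)
{
    if (pthread_mutex_trylock(&A) == 0)
        return;          /* got A out of order; no deadlock possible */

    /* Fall back: drop B, then take both in the canonical A->B order. */
    pthread_mutex_unlock(&B);
    pthread_mutex_lock(&A);
    pthread_mutex_lock(&B);
}

One wrinkle the sketch glosses over: in the fallback path, B is briefly released, so whatever state it protected may have changed and must be revalidated by the caller once both locks are held again.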