On Thu, 2020-01-23 at 11:20 +0100, Alexander Graf wrote:[...]
Hi Alex,
On 22.01.20 18:43, Alexander Duyck wrote:
The overall guest size is kept fairly small, only a few GB, while the test
is running. If host memory were oversubscribed, this patch set should
result in a performance improvement, since swapping memory in the host can
be avoided.
I really like the approach overall. Voluntarily propagating free memory
from a guest to the host has been a sore point ever since KVM was
around. This solution looks like a very elegant way to do so.
The big piece I'm missing is the page cache. Linux will by default try
to keep the free list as small as it can in favor of page cache, so most
of the benefit of this patch set will be void in real world scenarios.
Agreed. This is the next piece I plan to work on once this is accepted.
For now the quick and dirty approach is essentially to make use of the
/proc/sys/vm/drop_caches interface in the guest, either by putting it in a
cronjob somewhere or by running it after memory-intensive workloads.
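The "quick and dirty" approach described above could be sketched roughly as follows. This is a hypothetical illustration, not part of the patch set: the `run` wrapper and `DRY_RUN` guard are my own additions so the script can be read and executed safely without root.

```shell
#!/bin/sh
# Sketch of the interim approach from the thread: periodically drop clean
# page cache in the guest so freed pages become eligible for hinting back
# to the host. With DRY_RUN=1 (the default) the commands are only printed.
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" = 1 ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

# Flush dirty data first so more of the page cache is clean and droppable.
run sync

# Writing 3 to drop_caches drops both the page cache (1) and reclaimable
# slab objects such as dentries and inodes (2).
run sh -c 'echo 3 > /proc/sys/vm/drop_caches'
```

In the cronjob variant mentioned above, the same two commands (without the dry-run guard) would simply be scheduled at whatever interval suits the workload.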
Traditionally, this was solved by creating pressure from the host
through virtio-balloon: exactly the piece that this patch set does away
with. I never liked "ballooning", because the host has very limited
visibility into the actual memory utility of its guests. So the
decision on how much memory is actually needed at a given point in time
should ideally stay with the guest.
What would keep us from applying the page hinting approach to inactive,
clean page cache pages? With writeback in place as well, we would slowly
propagate pages from
dirty -> clean -> clean, inactive -> free -> host owned
which gives a guest a natural path to give up "not important" memory.
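For reference, the "clean, inactive" pool this path would draw from is already visible from userspace. The mapping is approximate (the inactive file LRU can also hold dirty pages awaiting writeback), but Inactive(file) in /proc/meminfo is roughly the set of file-backed pages the kernel would reclaim, and thus hint, first:

```shell
# Show the active and inactive file-backed (page cache) LRU sizes.
# Inactive(file) approximates the "clean, inactive" candidates above.
grep -E '^(Active|Inactive)\(file\)' /proc/meminfo
```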
I considered something similar. Basically one thought I had was to
essentially look at putting together some sort of epoch. When the host is
under memory pressure it would need to somehow notify the guest and then
the guest would start moving the epoch forward so that we start evicting
pages out of the page cache when the host is under memory pressure.
The big problem I see is that what I really want from a user's point of
view is a tuneable that says "Automatically free clean page cache pages
that were not accessed in the last X minutes". Otherwise we run the risk
of evicting page cache pages that are still occasionally in use.
I have a hard time grasping the mm code well enough to judge how hard
that would be to implement, though :).
Alex
Yeah, I am not exactly an expert on this either, as I have only been
working in the MM tree for about a year now.
I have submitted this as a topic for LSF/MM summit[1] and I am hoping to
get some feedback on the best way to apply proactive memory pressure as
one of the subtopics if it is selected.