Re: [PATCH 2/3] sched/deadline: fix bandwidth check/update when migrating tasks between exclusive cpusets
From: Peter Zijlstra
Date: Fri Sep 19 2014 - 17:26:03 EST
On Fri, Sep 19, 2014 at 10:22:40AM +0100, Juri Lelli wrote:
> Exclusive cpusets are the only way users can restrict the affinity of
> SCHED_DEADLINE tasks (performing what is commonly called clustered
> scheduling). Unfortunately, this is currently broken for two reasons:
>
> - No check is performed when the user tries to attach a task to
> an exclusive cpuset (recall that exclusive cpusets have an
> associated maximum allowed bandwidth).
>
> - Bandwidths of source and destination cpusets are not correctly
> updated after a task is migrated between them.
>
> This patch fixes both things at once, as they are two sides of
> the same coin.
>
> The check is performed in cpuset_can_attach(), as there aren't any
> points of failure after that function. The update is split into two
> halves: we first reserve bandwidth in the destination cpuset, once
> we pass the check in cpuset_can_attach(), and we then release
> bandwidth from the source cpuset when the task's affinity is
> actually changed. Even though there can be time windows during which
> sched_setattr() may erroneously fail in the source cpuset, we are
> fine with that, as we can't perform an atomic update of both cpusets
> at once.
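
A minimal user-space model of the two-phase scheme described above may
help make it concrete; this is not the patch code, and struct cluster,
cluster_reserve() and cluster_release() are illustrative names only:

/*
 * Toy model (user space, not kernel code) of the two-phase accounting:
 * reserve bandwidth in the destination cluster before the move, and
 * release it from the source only once the task has actually migrated.
 */
#include <stdbool.h>
#include <stdio.h>

struct cluster {
	int nr_cpus;		/* CPUs in this (exclusive) cpuset	  */
	double used_bw;		/* sum of runtime/period of its DL tasks  */
};

/* Phase 1: roughly what the check called from cpuset_can_attach()
 * has to decide before the attach is allowed to proceed.		  */
static bool cluster_reserve(struct cluster *dst, double task_bw)
{
	if (dst->used_bw + task_bw > (double)dst->nr_cpus)
		return false;	/* would over-commit: refuse the attach	  */
	dst->used_bw += task_bw;
	return true;
}

/* Phase 2: runs only when the task's affinity has actually changed.	  */
static void cluster_release(struct cluster *src, double task_bw)
{
	src->used_bw -= task_bw;
}

int main(void)
{
	struct cluster A = { .nr_cpus = 1, .used_bw = 0.4 };
	struct cluster B = { .nr_cpus = 3, .used_bw = 2.5 };
	double task_bw = 0.7;	/* e.g. runtime 7ms every period of 10ms  */

	if (cluster_reserve(&A, task_bw)) {
		cluster_release(&B, task_bw);
		printf("task moved B -> A\n");
	} else {
		printf("attach refused, bandwidth stays in B\n");
	}
	return 0;
}
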
The thing I cannot find is whether we correctly deal with updates to the
cpusets themselves. Say we first set up 2 (exclusive) sets, A:cpu0 and
B:cpu1-3, then assign tasks, and then update the cpu masks like:
B:cpu2,3, A:cpu1,2.
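
To make that concern concrete with the same toy model as above (again,
illustrative names only, not kernel code): shrinking a set's cpumask
reduces its capacity, so the admission test would have to be redone
against the new CPU count, something like:

/* Hypothetical re-check when an exclusive set's cpumask is resized.	  */
static bool cluster_set_nr_cpus(struct cluster *c, int new_nr_cpus)
{
	/*
	 * E.g. B:cpu1-3 carrying 2.5 CPUs worth of bandwidth cannot
	 * shrink to B:cpu2,3 (capacity 2) without breaking guarantees.
	 */
	if (c->used_bw > (double)new_nr_cpus)
		return false;
	c->nr_cpus = new_nr_cpus;
	return true;
}
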