Re: [PATCH] x86: Consider multiple nodes in a single socket to be "sane"
From: Ingo Molnar
Date: Tue Sep 16 2014 - 02:44:12 EST
* Chuck Ebbert <cebbert.lkml@xxxxxxxxx> wrote:
> On Tue, 16 Sep 2014 05:29:20 +0200
> Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:
>
> > On Mon, Sep 15, 2014 at 03:26:41PM -0700, Dave Hansen wrote:
> > >
> > > I'm getting the spew below when booting with Haswell (Xeon
> > > E5-2699) CPUs and the "Cluster-on-Die" (CoD) feature
> > > enabled in the BIOS.
> >
> > What is that cluster-on-die thing? I've heard it before but
> > never could find anything on it.
>
> Each core has a 2.5MB slice of L3; the slices are connected in a
> ring so that they act like a single shared cache. The HW tries to
> place data closest to the core that uses it. On the larger
> processors there are two rings with an interconnect between them
> that adds latency whenever a cache fetch has to cross it. CoD
> breaks that connection and effectively gives you two nodes on
> one die.
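(As an aside, the split is visible from userspace. Below is a minimal
sketch, not from this thread, that counts how many NUMA nodes each
physical package spans, using sysfs plus libnuma; the 64-package cap
and the output format are arbitrary illustration. Build with
"cc cod_check.c -lnuma".)

/* Sketch: flag packages that span more than one NUMA node (CoD-like). */
#include <stdio.h>
#include <numa.h>

int main(void)
{
	if (numa_available() < 0) {
		fprintf(stderr, "no NUMA support\n");
		return 1;
	}

	int ncpus = numa_num_configured_cpus();
	/* nodes_seen[pkg]: bitmask of node ids seen in package pkg;
	 * sized arbitrarily for illustration. */
	unsigned long nodes_seen[64] = { 0 };

	for (int cpu = 0; cpu < ncpus; cpu++) {
		char path[128];
		int pkg;
		FILE *f;

		snprintf(path, sizeof(path),
			 "/sys/devices/system/cpu/cpu%d/topology/physical_package_id",
			 cpu);
		f = fopen(path, "r");
		if (!f)
			continue;	/* CPU may be offline */
		if (fscanf(f, "%d", &pkg) == 1 && pkg >= 0 && pkg < 64) {
			int node = numa_node_of_cpu(cpu);
			if (node >= 0 && node < 64)
				nodes_seen[pkg] |= 1UL << node;
		}
		fclose(f);
	}

	for (int pkg = 0; pkg < 64; pkg++) {
		int n = __builtin_popcountl(nodes_seen[pkg]);
		if (n > 1)
			printf("package %d spans %d NUMA nodes (CoD-like)\n",
			       pkg, n);
	}
	return 0;
}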
Note that that's not really a 'NUMA node' in the way lots of
places in the kernel assume it: permanent placement asymmetry
(and access cost asymmetry) of RAM.
It's a new topology construct that needs new handling (and
probably a new mask): Non-Uniform Cache Architecture (NUCA)
or so.
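To make that concrete: a purely illustrative sketch of where such a
mask could slot in, patterned on the scheduler's existing
default_topology[] in kernel/sched/core.c (circa v3.17).
cpu_nuca_mask(), cpu_nuca_map and the NUCA level are hypothetical
names, not existing kernel API:

/* Assumes a hypothetical DEFINE_PER_CPU(struct cpumask, cpu_nuca_map),
 * filled in during topology detection with the CPUs that share one
 * CoD cluster (one L3 ring). */
static const struct cpumask *cpu_nuca_mask(int cpu)
{
	return &per_cpu(cpu_nuca_map, cpu);
}

static struct sched_domain_topology_level default_topology[] = {
#ifdef CONFIG_SCHED_SMT
	{ cpu_smt_mask, cpu_smt_flags, SD_INIT_NAME(SMT) },
#endif
#ifdef CONFIG_SCHED_MC
	{ cpu_coregroup_mask, cpu_core_flags, SD_INIT_NAME(MC) },
#endif
	{ cpu_nuca_mask, SD_INIT_NAME(NUCA) },	/* hypothetical new level */
	{ cpu_cpu_mask, SD_INIT_NAME(DIE) },
	{ NULL, },
};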
Thanks,
Ingo