Friday, January 17, 2020 4:13 PM, Rob Miller:
Subject: Re: [PATCH 3/5] vDPA: introduce vDPA bus
[...]

> On 2020/1/17 8:13 PM, Michael S. Tsirkin wrote:
>> On Thu, Jan 16, 2020 at 08:42:29PM +0800, Jason Wang wrote:
>>> + * @set_map:			Set device memory mapping, optional
>>> + *				and only needed for devices using
>>> + *				device-specific DMA translation
>>> + *				(on-chip IOMMU)
>>> + *				@vdev: vdpa device
>>> + *				@iotlb: vhost memory mapping to be
>>> + *				used by the vDPA
>>> + *				Returns integer: success (0) or error (< 0)
>> OK, so any change just swaps in a completely new mapping? Wouldn't
>> this make minor changes such as memory hotplug quite expensive?

What is the concern? Traversing the rb tree, or fully replacing the
on-chip IOMMU translations?
If the latter, then I think we can take such an optimization at the
driver level (i.e. update only the diff between the two mappings).

If the former, then I think memory hotplug is a heavy flow regardless.
Do you think the extra cycles for the tree traversal will be visible in
any way?
> My understanding is that the incremental updating of the on-chip IOMMU
> may degrade the performance. So vendor vDPA drivers may want to know
> all the mappings at once.

Yes, exactly. For the Mellanox case, for instance, many optimizations
can be performed on a given memory layout.
> Technically, we can keep the incremental API here and let the vendor
> vDPA drivers record the full mapping internally, which may slightly
> increase the complexity of the vendor driver.

What will be the trigger for the driver to know it has received the
last mapping in this series, so that it can now push it to the on-chip
IOMMU?
> We need more inputs from vendors here.
>
> Thanks