On Fri, May 17, 2024 at 01:02:56AM +0530, Mukesh Ojha wrote:
> If we disable CONFIG_ZONE_DMA32 so that DMA memory gets selected from
> the higher, above-4GB range, the dma_alloc_coherent() usage inside
> qcom_scm_pas_init_image(), which uses the 32-bit scm device, will fail
> when the size of the data passed is less than PAGE_SIZE, and the
> fallback to allocating from the buddy pool will fail as well.
I interpreted this as:
When CONFIG_ZONE_DMA32 is disabled, dma_alloc_coherent() fails when size
is < PAGE_SIZE. qcom_scm_pas_init_image() will fail to allocate using
dma_alloc_coherent() and incorrectly fall back to buddy pool.
This justification seems incorrect to me. None of the other
dma_alloc_coherent() users in the scm driver are page-aligning their
requests. Is something else going on?
> Convert the size to be aligned to PAGE_SIZE before it gets passed
> to dma_alloc_coherent(), so that it gets coherent memory in the
> lower 4GB from the Linux CMA region.
>
> Signed-off-by: Mukesh Ojha <quic_mojha@xxxxxxxxxxx>
> ---
>  drivers/firmware/qcom/qcom_scm.c | 8 +++++---
>  1 file changed, 5 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/firmware/qcom/qcom_scm.c b/drivers/firmware/qcom/qcom_scm.c
> index 029ee5edbea6..6616048f1c33 100644
> --- a/drivers/firmware/qcom/qcom_scm.c
> +++ b/drivers/firmware/qcom/qcom_scm.c
> @@ -562,6 +562,7 @@ static void qcom_scm_set_download_mode(u32 dload_mode)
>  int qcom_scm_pas_init_image(u32 peripheral, const void *metadata, size_t size,
>  			    struct qcom_scm_pas_metadata *ctx)
>  {
> +	size_t page_aligned_size;
>  	dma_addr_t mdata_phys;
>  	void *mdata_buf;
>  	int ret;
> @@ -579,7 +580,8 @@ int qcom_scm_pas_init_image(u32 peripheral, const void *metadata, size_t size,
>  	 * data blob, so make sure it's physically contiguous, 4K aligned and
>  	 * non-cachable to avoid XPU violations.
>  	 */
> -	mdata_buf = dma_alloc_coherent(__scm->dev, size, &mdata_phys,
> +	page_aligned_size = PAGE_ALIGN(size + PAGE_SIZE);
Isn't PAGE_ALIGN(size) good enough? Why do you need to round up to a
second page? Maybe you thought PAGE_ALIGN() was PAGE_ALIGN_DOWN()?
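
For illustration, here is a minimal standalone sketch (not part of the
patch) that re-implements the kernel's PAGE_ALIGN()/PAGE_ALIGN_DOWN()
arithmetic from include/linux/mm.h, assuming a 4K page and a
hypothetical sub-page metadata size of 2000 bytes:

/*
 * Userspace sketch only: fixed 4K page size, same rounding arithmetic
 * as the kernel macros, to show what each expression produces.
 */
#include <stdio.h>

#define PAGE_SIZE		4096UL
#define PAGE_ALIGN(x)		(((x) + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1))
#define PAGE_ALIGN_DOWN(x)	((x) & ~(PAGE_SIZE - 1))

int main(void)
{
	unsigned long size = 2000;	/* hypothetical metadata blob, < PAGE_SIZE */

	/* rounds up to one page (4096) -- already page-sized */
	printf("PAGE_ALIGN(size)             = %lu\n", PAGE_ALIGN(size));

	/* the patch's expression: rounds up to two pages (8192), one wasted */
	printf("PAGE_ALIGN(size + PAGE_SIZE) = %lu\n", PAGE_ALIGN(size + PAGE_SIZE));

	/* rounds down to 0 -- only this variant would need the extra page */
	printf("PAGE_ALIGN_DOWN(size)        = %lu\n", PAGE_ALIGN_DOWN(size));

	return 0;
}

So PAGE_ALIGN(size) already rounds a sub-page request up to one full
page; adding PAGE_SIZE before aligning just burns an extra page on every
call. Only PAGE_ALIGN_DOWN() would round a sub-page size down to zero
and actually need the extra page added in.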