Re: Possible bio merging breakage in mp bio rework
From: Ming Lei
Date: Sat Apr 06 2019 - 08:31:03 EST
On Sat, Apr 06, 2019 at 09:09:12AM +0300, Nikolay Borisov wrote:
>
>
> > On 6.04.19 at 3:16, Ming Lei wrote:
> > Hi Nikolay,
> >
> > On Fri, Apr 05, 2019 at 07:04:18PM +0300, Nikolay Borisov wrote:
> >> Hello Ming,
> >>
> >> Following the mp biovec rework, what is the maximum amount of
> >> data a bio can contain? Should it be PAGE_SIZE * the bio_vec count
> >
> > There isn't any maximum data limit on a bio submitted from the fs;
> > the block layer will make the final bio sent to the driver correct
> > by applying all kinds of queue limits, such as max segment size,
> > max segment count, max sectors, ...
> >
> >> or something else? Currently I can see bios as large as 127 megs
> >> on sequential workloads. I was prompted to ask because btrfs has a
> >> memory allocation whose size depends on the amount of data in the
> >> bio, and this particular allocation started failing with order 6 allocs.
> >
> > Could you share the code? I don't see why an order 6 alloc is a must.
>
> When a bio is submitted, btrfs has to calculate the checksum for it;
> this happens in btrfs_csum_one_bio. Said checksums are stored in a
> kmalloc'ed array, whose size is calculated as:
>
> 32 + (bio_size / btrfs' block size (usually 4k)) * 4. So for a 127mb
> bio that would be: 32 + ((134184960 / 4096) * 4) = 131072 bytes, i.e.
> a 128k, order 5 allocation. Admittedly the code in btrfs should know
> better than to make unbounded allocations without a fallback, but bios
> suddenly becoming rather unbounded in size caught us off guard.
OK, thanks for your explanation.
Given that this is a btrfs-specific feature, I'd suggest you set a max size
for the btrfs bio. For example, supposing the max checksum array is 4k, the
max bio size can be calculated as:
((4k - 32) / 4) * btrfs' block size
which should be big enough.
Thanks,
Ming