[kata-dev] How to build test case for dax windows size?

李瑞友 liruiyou at huayun.com
Fri Jul 9 06:40:06 UTC 2021


Ok, got it, this is very useful, thank you


On 2021/7/9, 6:03 AM, "Vivek Goyal" <vgoyal at redhat.com> wrote:

    On Thu, Jul 08, 2021 at 02:58:25AM +0000, 李瑞友 wrote:
    > The DAX window I set is not very large (for example, 512MB). I am worried that if the setting is too large, then after a lot of VM workloads are started it will take up a lot of memory (I know it is a mapping technology). Would a large setting, such as 10G or 50G, be a problem?
    
    DAX window range reclaim is very slow and it kills performance. So if the DAX
    window is small while the active data set being accessed is bigger than the
    DAX window, then more reclaim will happen and performance will suffer.
    
    DAX will perform best when the active data set is smaller than the DAX window.
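
    For example, one way to build such a test (a rough sketch, assuming the
    virtiofs share is mounted at /usr/share/nginx/html and the DAX window is
    set to 1GB; adjust the sizes to your actual window) is to run the same
    random-read job twice, once with a working set that fits in the window and
    once with one that does not:

    # working set (512MB) smaller than a 1GB DAX window: little reclaim expected
    fio --name=dax-fit --directory=/usr/share/nginx/html \
        --rw=randread --bs=4k --size=512M \
        --runtime=60 --time_based --numjobs=1

    # working set (4GB) larger than the 1GB DAX window: range reclaim kicks in,
    # so throughput should drop noticeably
    fio --name=dax-overflow --directory=/usr/share/nginx/html \
        --rw=randread --bs=4k --size=4G \
        --runtime=60 --time_based --numjobs=1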
    
    Approximate memory usage should be as follows. We allocate the following
    per 2MB of DAX window range.
    
    - 512 struct page  (one struct page for each 4K range)
    - One "struct fuse_dax_mapping".
    
    Let's say struct page is around 64 bytes. "struct fuse_dax_mapping" is
    probably noise given there is only one of these per 2MB. So most of the
    memory should be consumed by "struct page".
    
    Per 2MB of DAX window, struct page should consume about 512 * 64 = 32K
    of memory.
    
    1GB of dax window: 512 * 32K = 16MB of memory usage.
    4GB of dax window: 64MB of memory usage.
    8GB of dax window: 128MB of memory usage.
    16GB of dax window: 256MB of memory usage.
    32GB of dax window: 512MB of memory usage.
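
    As a quick sanity check of that arithmetic (the 64-byte struct page size is
    an estimate, not an exact figure), the table above can be reproduced with a
    small shell loop:

    for gb in 1 4 8 16 32; do
        ranges=$((gb * 1024 / 2))            # number of 2MB ranges in the window
        kb=$((ranges * 512 * 64 / 1024))     # 512 struct page * 64 bytes per 2MB range
        echo "${gb}GB window -> $((kb / 1024))MB of struct page"
    done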
    
    So while the memory usage is not trivial, it is not too bad either. I would
    say try an 8GB or 16GB DAX window if that fits well into your use case.
    
    Thanks
    Vivek
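
    For completeness: in Kata Containers the DAX window is usually sized from the
    runtime's configuration.toml. The snippet below uses the virtio_fs_cache_size
    key (in MiB) as recent Kata releases expose it, but the exact key name, path
    and default vary by version, so treat it as a sketch rather than the
    authoritative setting:

    # e.g. /etc/kata-containers/configuration.toml (path varies by install)
    virtio_fs_cache_size = 8192    # DAX window size in MiB (8GB, per the suggestion above)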
    
    > The following is my test performance data. The results seem quite different, and I don't know why.
    > 
    > fio --name=small-file-multi-read --directory=/usr/share/nginx/html \
    >     --rw=randread --file_service_type=sequential \
    >     --bs=4k --filesize=10M --nrfiles=100 \
    >     --runtime=60 --time_based --numjobs=1 
    >  ...   
    > small-file-multi-read: (groupid=0, jobs=1): err= 0: pid=190: Mon Jul  5 07:05:18 2021
    >   read: IOPS=12.0k, BW=46.0MiB/s (49.2MB/s)(212MiB/4505msec)
    > 
    > fio --name=5G-bigfile-rand-read     --directory=/usr/share/nginx/html     --rw=randread --size=5G --bs=4k     --runtime=60 --time_based     --numjobs=1 
    > ...
    > 5G-bigfile-rand-read: (groupid=0, jobs=1): err= 0: pid=184: Mon Jul  5 06:57:08 2021
    >   read: IOPS=1255, BW=5024KiB/s (5144kB/s)(294MiB/60002msec)
    > 
    > 
    > 
    > On 2021/7/7, 8:44 PM, "Vivek Goyal" <vgoyal at redhat.com> wrote:
    > 
    >     On Fri, Jul 02, 2021 at 10:57:43AM +0200, Fabiano Fidêncio wrote:
    >     > Ryo Li,
    >     > 
    >     > On Thu, Jul 1, 2021 at 11:49 AM 李瑞友 <liruiyou at huayun.com> wrote:
    >     > >
    >     > > Hi guys
    >     > >
    >     > > I want to build a test scenario to see how performance differs with the size of the DAX window relative to the size of the files being read. But I am not sure how to build it.
    >     > >
    >     > > I tried the following two methods, but it seems that the performance results are similar
    >     
    >     I am not sure what the expectation is, or why changing the file size should
    >     change the throughput significantly. If the DAX window is big enough to
    >     accommodate both the small file and the large file completely, then the I/O
    >     rate will probably be the same/similar.
    >     
    >     Thanks
    >     Vivek
    >     
    >     > >
    >     > > fio -filename=/usr/share/nginx/html/400MBfile --rw=randread --loops=10 --group_reporting --name=400MBfile
    >     > >
    >     > > fio -filename=/usr/share/nginx/html/5Gfile --rw=randread -bs=16k --group_reporting --name=5Gfile
    >     > >
    >     > > sh-4.4# fio -filename=/usr/share/nginx/html/5Gfile  --rw=randread -bs=16k  --group_reporting --name=5Gfile
    >     > > 5Gfile: (g=0): rw=randread, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=psync, iodepth=1
    >     > > fio-3.19
    >     > > Starting 1 process
    >     > > Jobs: 1 (f=1): [r(1)][100.0%][r=148MiB/s][r=9450 IOPS][eta 00m:00s]
    >     > > 5Gfile: (groupid=0, jobs=1): err= 0: pid=139: Thu Jul  1 09:38:07 2021
    >     > >   read: IOPS=9931, BW=155MiB/s (163MB/s)(5120MiB/32993msec)
    >     > >     clat (usec): min=52, max=18734, avg=95.79, stdev=69.49
    >     > >      lat (usec): min=53, max=18735, avg=96.47, stdev=69.53
    >     > >     clat percentiles (usec):
    >     > >      |  1.00th=[   61],  5.00th=[   64], 10.00th=[   66], 20.00th=[   71],
    >     > >      | 30.00th=[   75], 40.00th=[   79], 50.00th=[   82], 60.00th=[   88],
    >     > >      | 70.00th=[   97], 80.00th=[  114], 90.00th=[  141], 95.00th=[  163],
    >     > >      | 99.00th=[  235], 99.50th=[  293], 99.90th=[  734], 99.95th=[  807],
    >     > >      | 99.99th=[ 1287]
    >     > >    bw (  KiB/s): min=87471, max=191168, per=100.00%, avg=159511.97, stdev=18299.18, samples=65
    >     > >    iops        : min= 5466, max=11948, avg=9969.48, stdev=1143.77, samples=65
    >     > >   lat (usec)   : 100=72.93%, 250=26.28%, 500=0.52%, 750=0.19%, 1000=0.06%
    >     > >   lat (msec)   : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.01%
    >     > >   cpu          : usr=6.04%, sys=46.45%, ctx=329379, majf=1, minf=13
    >     > >   IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
    >     > >      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
    >     > >      complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
    >     > >      issued rwts: total=327680,0,0,0 short=0,0,0,0 dropped=0,0,0,0
    >     > >      latency   : target=0, window=0, percentile=100.00%, depth=1
    >     > >
    >     > > Run status group 0 (all jobs):
    >     > >    READ: bw=155MiB/s (163MB/s), 155MiB/s-155MiB/s (163MB/s-163MB/s), io=5120MiB (5369MB), run=32993-32993msec
    >     > 
    >     > 
    >     > I've looped in some folks from Red Hat and Intel who have been working
    >     > with virtiofs, either on performance, on integration, or on virtiofs
    >     > itself.
    >     > I think they'll be able to provide you with some valuable feedback.
    >     > 
    >     > Please, mind that due to July 4th I think folks from the US may be off
    >     > for the next few days.
    >     > 
    >     > Best Regards,
    >     > -- 
    >     > Fabiano Fidêncio
    >     > 
    >     
    >     
    > 
    
    


