  1. Sep 14, 2015
    • selftests/zram: fix syntax error · 7ef7cc9f
      Zhang Zhen authored

      Not all shells define the UID variable; it is a bash and zsh feature
      only. In other shells UID is unset, so the test command expands to
      "[ != 0 ]", which is a syntax error.
      
      Without this patch:
      root@HGH1000007090:/opt/work/linux/tools/testing/selftests/zram# sh zram.sh
      zram.sh: 8: [: !=: unexpected operator
      zram.sh : No zram.ko module or /dev/zram0 device file not found
      zram.sh : CONFIG_ZRAM is not set
      
      With this patch:
      root@HGH1000007090:/opt/work/linux/tools/testing/selftests/zram# sh ./zram.sh
      zram.sh : No zram.ko module or /dev/zram0 device file not found
      zram.sh : CONFIG_ZRAM is not set
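
      A minimal sketch of the portable form of the check (the exact patch
      may differ; "id -u" is the POSIX way to obtain the current uid,
      unlike the bash/zsh-only $UID):

      	# portable root check for plain /bin/sh
      	if [ "$(id -u)" != 0 ]; then
      		echo "$0 : must be run as root"	# hypothetical message
      		exit 1
      	fi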
      
      Signed-off-by: Zhang Zhen <zhenzhang.zhang@huawei.com>
      Signed-off-by: Shuah Khan <shuahkh@osg.samsung.com>
  2. Aug 29, 2015
    • libnvdimm, pmem: 'struct page' for pmem · 32ab0a3f
      Dan Williams authored

      Enable the pmem driver to handle PFN device instances.  Attaching a pmem
      namespace to a pfn device triggers the driver to allocate and initialize
      struct page entries for pmem.  Memory capacity for this allocation
      comes exclusively from RAM for now, which is suitable for low
      PMEM-to-RAM ratios.  This mechanism will be expanded later for
      setting an "allocate from PMEM" policy.
      
      Cc: Boaz Harrosh <boaz@plexistor.com>
      Cc: Ross Zwisler <ross.zwisler@linux.intel.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
    • libnvdimm, pfn: 'struct page' provider infrastructure · e1455744
      Dan Williams authored

      Implement the base infrastructure for libnvdimm PFN devices. Similar
      to BTT devices, they take a namespace as a backing device and layer
      functionality on top. In this case the functionality is reserving space
      for an array of 'struct page' entries to be handed out through
      pfn_to_page(). For now this is just the basic libnvdimm-device-model for
      configuring the base PFN device.
      
      As the namespace claiming mechanism for PFN devices is mostly
      identical to that of BTT devices, drivers/nvdimm/claim.c is created
      to house the common bits.
      
      Cc: Ross Zwisler <ross.zwisler@linux.intel.com>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
  3. Aug 27, 2015
    • nd_blk: change aperture mapping from WC to WB · 67a3e8fe
      Ross Zwisler authored

      This should result in a pretty sizeable performance gain for reads.
      For a rough comparison I did some simple read testing using PMEM,
      comparing reads from a write-combining (WC) mapping against a
      write-back (WB) mapping.  This was done on a random lab machine.
      
      PMEM reads from a write combining mapping:
      	# dd of=/dev/null if=/dev/pmem0 bs=4096 count=100000
      	100000+0 records in
      	100000+0 records out
      	409600000 bytes (410 MB) copied, 9.2855 s, 44.1 MB/s
      
      PMEM reads from a write-back mapping:
      	# dd of=/dev/null if=/dev/pmem0 bs=4096 count=1000000
      	1000000+0 records in
      	1000000+0 records out
      	4096000000 bytes (4.1 GB) copied, 3.44034 s, 1.2 GB/s
      
      To be able to safely support a write-back aperture I needed to add
      support for the "read flush" _DSM flag, as outlined in the DSM spec:

      http://pmem.io/documents/NVDIMM_DSM_Interface_Example.pdf

      This flag tells the ND BLK driver that it needs to flush the cache lines
      associated with the aperture after the aperture is moved but before any
      new data is read.  This ensures that any stale cache lines from the
      previous contents of the aperture will be discarded from the processor
      cache, and the new data will be read properly from the DIMM.  We know
      that the cache lines are clean and will be discarded without any
      writeback because either a) the previous aperture operation was a read,
      and we never modified the contents of the aperture, or b) the previous
      aperture operation was a write and we must have written back the dirtied
      contents of the aperture to the DIMM before the I/O was completed.
      
      In order to add support for the "read flush" flag I needed to add a
      generic routine to invalidate cache lines, mmio_flush_range().  This is
      protected by the ARCH_HAS_MMIO_FLUSH Kconfig variable, and is currently
      only supported on x86.
      
      Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
    • selftests: check before install · a7d0f078
      Bamvor Jian Zhang authored

      When the test cases are not supported by the current architecture,
      the install file lists (TEST_PROGS, TEST_PROGS_EXTENDED and
      TEST_FILES) will be empty. Check them before installation to avoid
      a failure reported by the install program.
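
      A minimal sketch of the kind of guard this adds (the variable names
      follow the selftests conventions above; the exact install recipe is
      an assumption):

      	# skip the install step entirely when there is nothing to install
      	if [ -n "$TEST_PROGS$TEST_PROGS_EXTENDED$TEST_FILES" ]; then
      		install -t "$INSTALL_PATH" $TEST_PROGS $TEST_PROGS_EXTENDED $TEST_FILES
      	fi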
      
      Signed-off-by: Bamvor Jian Zhang <bamvor.zhangjian@linaro.org>
      Signed-off-by: Shuah Khan <shuahkh@osg.samsung.com>
    • selftests/zram: Adding zram tests · f21fb798
      Naresh Kamboju authored

      zram: Compressed RAM based block devices
      ----------------------------------------
      The zram module creates RAM based block devices named /dev/zram<id>
      (<id> = 0, 1, ...). Pages written to these disks are compressed and
      stored in memory itself. These disks allow very fast I/O, and
      compression provides good memory savings. Some of the use cases
      include /tmp storage, use as swap disks, various caches under /var,
      and maybe many more :)
      
      Statistics for individual zram devices are exported through sysfs nodes at
      /sys/block/zram<id>/
      
      This patch validates the zram functionality. The test interacts with
      the block device /dev/zram<id> and the sysfs nodes under
      /sys/block/zram<id>/, as sketched below.
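
      A minimal sketch of the sysfs sequence the tests exercise (the
      device id, sizes and mount point are illustrative assumptions):

      	modprobe zram                              # creates /dev/zram0
      	echo 2 > /sys/block/zram0/max_comp_streams # compression streams
      	echo lzo > /sys/block/zram0/comp_algorithm # pick an algorithm
      	echo 2M > /sys/block/zram0/disksize        # uncompressed capacity
      	mkfs.ext4 /dev/zram0                       # use it like any disk...
      	mount /dev/zram0 /mnt                      # ...and mount it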
      
      zram.sh: sanity check of CONFIG_ZRAM; runs the zram01 and zram02 tests
      zram01.sh: creates general purpose ram disks with different filesystems
      zram02.sh: creates a block device for swap
      zram_lib.sh: library with initialization/cleanup functions
      README: zram introduction and the required Kconfig options
      Makefile: to run zram tests
      
      zram test output
      -----------------
      ./zram.sh
      --------------------
      running zram tests
      --------------------
      /dev/zram0 device file found: OK
      set max_comp_streams to zram device(s)
      /sys/block/zram0/max_comp_streams = '2' (1/1)
      zram max streams: OK
      test that we can set compression algorithm
      supported algs: [lzo] lz4
      /sys/block/zram0/comp_algorithm = 'lzo' (1/1)
      zram set compression algorithm: OK
      set disk size to zram device(s)
      /sys/block/zram0/disksize = '2097152' (1/1)
      zram set disksizes: OK
      set memory limit to zram device(s)
      /sys/block/zram0/mem_limit = '2M' (1/1)
      zram set memory limit: OK
      make ext4 filesystem on /dev/zram0
      zram mkfs.ext4: OK
      mount /dev/zram0
      zram mount of zram device(s): OK
      fill zram0...
      zram0 can be filled with '1932' KB
      zram used 3M, zram disk sizes 2097152M
      zram compression ratio: 699050.66:1: OK
      zram cleanup
      zram01 : [PASS]
      
      /dev/zram0 device file found: OK
      set max_comp_streams to zram device(s)
      /sys/block/zram0/max_comp_streams = '2' (1/1)
      zram max streams: OK
      set disk size to zram device(s)
      /sys/block/zram0/disksize = '1048576' (1/1)
      zram set disksizes: OK
      set memory limit to zram device(s)
      /sys/block/zram0/mem_limit = '1M' (1/1)
      zram set memory limit: OK
      make swap with zram device(s)
      done with /dev/zram0
      zram making zram mkswap and swapon: OK
      zram swapoff: OK
      zram cleanup
      zram02 : [PASS]
      
      CC: Shuah Khan <shuahkh@osg.samsung.com>
      CC: Tyler Baker <tyler.baker@linaro.org>
      CC: Milosz Wasilewski <milosz.wasilewski@linaro.org>
      CC: Alexey Kodanev <alexey.kodanev@oracle.com>
      Signed-off-by: Naresh Kamboju <naresh.kamboju@linaro.org>
      Signed-off-by: Alexey Kodanev <alexey.kodanev@oracle.com>
      Reviewed-by: Tyler Baker <tyler.baker@linaro.org>
      Signed-off-by: Shuah Khan <shuahkh@osg.samsung.com>
  4. Aug 19, 2015
    • libnvdimm, e820: make CONFIG_X86_PMEM_LEGACY a tristate option · 7a67832c
      Dan Williams authored

      We currently register a platform device for e820 type-12 memory and
      register a nvdimm bus beneath it.  Registering the platform device
      triggers the device-core machinery to probe for a driver, but that
      search currently comes up empty.  Building the nvdimm-bus registration
      into the e820_pmem platform device registration in this way forces
      libnvdimm to be built-in.  Instead, convert the built-in portion of
      CONFIG_X86_PMEM_LEGACY to simply register a platform device and move the
      rest of the logic to the driver for e820_pmem, for the following
      reasons:
      
      1/ Letting e820_pmem support be a module allows building and testing
         libnvdimm.ko changes without rebooting
      
      2/ All the normal policy around modules can be applied to e820_pmem
         (unbind to disable and/or blacklisting the module from loading by
         default); see the sketch after this list
      
      3/ Moving the driver to a generic location and converting it to scan
         "iomem_resource" rather than "e820.map" means any other architecture can
         take advantage of this simple nvdimm resource discovery mechanism by
         registering a resource named "Persistent Memory (legacy)"
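
      A hedged sketch of the module policy item 2/ describes; the device
      name, sysfs driver path and module name are illustrative
      assumptions, not taken from the patch:

      	# stop driving the device without unloading the module
      	echo e820_pmem.0 > /sys/bus/platform/drivers/e820_pmem/unbind
      	# keep the module from auto-loading by default
      	echo "blacklist e820_pmem" > /etc/modprobe.d/e820_pmem.conf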
      
      Cc: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>