  Apr 09, 2015
    • amd-xgbe: Add support for the netdev Tx watchdog · a8373f1a
      Lendacky, Thomas authored
      
      
      Add support for detecting a hung Tx task by adding the netdev
      ndo_tx_timeout function callback. Do not set the watchdog_timeo value,
      so the system default timeout (currently 5 seconds) is used. (A sketch
      of this hookup follows this entry.)
      
      Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      a8373f1a
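
      For context, a minimal sketch of how a driver can wire up this callback
      (the function body, the restart_work member, and the xgbe_prv_data
      layout here are illustrative assumptions, not the actual amd-xgbe code;
      the ndo_tx_timeout signature shown is the one from this kernel era):

          #include <linux/netdevice.h>
          #include <linux/workqueue.h>

          /* Hypothetical private data; only the field used below is shown. */
          struct xgbe_prv_data {
                  struct work_struct restart_work;
          };

          /* Called by the core Tx watchdog when a queue has been stuck for
           * longer than watchdog_timeo.
           */
          static void xgbe_tx_timeout(struct net_device *netdev)
          {
                  struct xgbe_prv_data *pdata = netdev_priv(netdev);

                  netdev_warn(netdev, "tx timeout, device restarting\n");
                  schedule_work(&pdata->restart_work); /* recover outside IRQ context */
          }

          static const struct net_device_ops xgbe_netdev_ops = {
                  /* ... other callbacks ... */
                  .ndo_tx_timeout = xgbe_tx_timeout,
          };

          /* netdev->watchdog_timeo is intentionally left at 0 so the core's
           * dev_watchdog uses its default timeout (5 seconds at the time of
           * this commit).
           */
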
    • amd-xgbe: Move Rx mode configuration into init · b876382b
      Lendacky, Thomas authored
      
      
      Currently a call to configure the Rx mode (promiscuous mode, all
      multicast mode, etc.) is made in xgbe_start separately from the
      xgbe_init function. This call to set the Rx mode should be part of the
      xgbe_init function so that calls to the init function don't have to be
      preceded by calls to configure the Rx mode. (A sketch of this structure
      follows this entry.)
      
      Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      b876382b
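
      A minimal sketch of the resulting call structure (function names follow
      the driver's naming style, but the bodies are hypothetical, not the
      actual amd-xgbe code):

          /* Hypothetical helper that applies promiscuous/allmulti/address
           * filter settings to the hardware.
           */
          static void xgbe_config_rx_mode(struct xgbe_prv_data *pdata);

          /* Apply the Rx mode as part of hardware init, so callers of
           * xgbe_init() no longer need a separate Rx-mode call beforehand.
           */
          static int xgbe_init(struct xgbe_prv_data *pdata)
          {
                  /* ... descriptor, DMA and MAC setup ... */

                  xgbe_config_rx_mode(pdata);

                  return 0;
          }

          static int xgbe_start(struct xgbe_prv_data *pdata)
          {
                  int ret;

                  ret = xgbe_init(pdata);   /* Rx mode is now applied in here */
                  if (ret)
                          return ret;

                  /* ... enable DMA/MAC, start the queues ... */
                  return 0;
          }
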
    • amd-xgbe: Allow rx-frames coalescing to be changed anytime · 8dee19e6
      Lendacky, Thomas authored
      
      
      Currently the device must be down in order to update the rx-frames
      coalescing setting because the interrupt indicator is set in the
      descriptor data during initialization. Allow this setting to be changed
      while the device is up by moving the interrupt decision into the
      descriptor reset function and basing the decision on the supplied
      descriptor index value. (A sketch of this idea follows this entry.)
      
      Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      8dee19e6
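
      A sketch of the idea (names mirror the driver's style, but the snippet
      is hypothetical, not the actual amd-xgbe code):

          /* Decide per descriptor, at reset time, whether this descriptor
           * should raise an Rx interrupt, based on its index and the current
           * rx-frames setting. Because this runs every time a descriptor is
           * reset, an ethtool change to rx-frames takes effect immediately,
           * even while the interface is up.
           */
          static void xgbe_rx_desc_reset(struct xgbe_prv_data *pdata,
                                         struct xgbe_ring_data *rdata,
                                         unsigned int index)
          {
                  unsigned int inte = 0;

                  if (pdata->rx_frames && !((index + 1) % pdata->rx_frames))
                          inte = 1;   /* interrupt on every rx_frames-th descriptor */

                  /* ... write the buffer addresses and the INTE bit into the
                   *     descriptor, then hand ownership back to the hardware ...
                   */
          }
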
    • e100: Use dma_rmb/wmb where appropriate · c335869f
      Alexander Duyck authored
      
      
      Reduce the CPU overhead for transmit and receive by using the
      lightweight dma_rmb()/dma_wmb() barriers instead of full barriers where
      they are applicable. (A sketch of the pattern follows this entry.)
      
      Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      Signed-off-by: Alexander Duyck <alexander.h.duyck@redhat.com>
      Acked-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      c335869f
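
      The general pattern these conversions follow, sketched with hypothetical
      descriptor fields and bit names (not the e100 structures):

          #include <linux/kernel.h>
          #include <asm/barrier.h>

          struct rx_desc {
                  __le16 status;  /* device sets a "done" bit when finished */
                  __le16 length;  /* valid only once the done bit is set */
          };

          static bool rx_desc_complete(struct rx_desc *desc, unsigned int *len)
          {
                  if (!(le16_to_cpu(desc->status) & 0x8000))  /* hypothetical done bit */
                          return false;

                  /* Both fields live in coherent DMA memory, so the cheap
                   * dma_rmb() is enough to order the status read before the
                   * reads of the other fields the device wrote; a full rmb()
                   * would be stronger (and slower) than needed.
                   */
                  dma_rmb();
                  *len = le16_to_cpu(desc->length);
                  return true;
          }
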
    • i40e/i40evf: Use dma_rmb where appropriate · 67317166
      Alexander Duyck authored
      
      
      Update i40e and i40evf to use dma_rmb.  This should improve performance
      by decreasing the barrier overhead on strongly ordered architectures.
      
      Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      Signed-off-by: Alexander Duyck <alexander.h.duyck@redhat.com>
      Acked-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      67317166
    • mlx4/mlx5: Use dma_wmb/rmb where appropriate · 12b3375f
      Alexander Duyck authored
      
      
      This patch should help to improve the performance of the mlx4 and mlx5
      drivers on a number of architectures.  For example, on x86 the
      dma_wmb/rmb calls equate to a barrier() call, as the architecture is
      already strongly ordered, and on PowerPC the call works out to an
      lwsync, which is significantly less expensive than the sync call that
      was being used for wmb.

      I placed the new barriers between any spots that seemed to be trying to
      order memory/memory reads or writes.  In any spots that involved MMIO I
      left the existing wmb in place, as the new barriers cannot order
      transactions between coherent and non-coherent memories. (A sketch of
      this distinction follows this entry.)

      v2: Reduced the replacements to just the spots where I could clearly
          identify the usage pattern.
      
      Cc: Amir Vadai <amirv@mellanox.com>
      Cc: Ido Shamay <idos@mellanox.com>
      Cc: Eli Cohen <eli@mellanox.com>
      Signed-off-by: Alexander Duyck <alexander.h.duyck@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      12b3375f
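
      A sketch of the distinction drawn here, with hypothetical names (not the
      mlx4/mlx5 structures): dma_wmb() is enough to order writes to coherent
      descriptor memory against each other, while the full wmb() has to stay
      where coherent writes must be ordered before an MMIO doorbell write:

          #include <linux/io.h>
          #include <asm/barrier.h>

          struct tx_wqe {
                  __be32 data[15];
                  __be32 owner;   /* hypothetical ownership word */
          };

          static void post_tx_wqe(struct tx_wqe *wqe, void __iomem *doorbell,
                                  u32 db_val)
          {
                  /* Coherent memory vs. coherent memory: dma_wmb() (lwsync on
                   * PowerPC, a compiler barrier on x86) orders the WQE
                   * contents before the ownership word flips.
                   */
                  dma_wmb();
                  wqe->owner = cpu_to_be32(1);

                  /* Coherent memory vs. MMIO: dma_wmb() cannot order this, so
                   * the full wmb() remains before ringing the doorbell.
                   */
                  wmb();
                  writel(db_val, doorbell);
          }
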
    • cxgb3/4/4vf: Update drivers to use dma_rmb/wmb where appropriate · 019be1cf
      Alexander Duyck authored
      
      
      Update the Chelsio Ethernet drivers to use the dma_rmb/wmb calls instead of
      the full barriers in order to improve performance.
      
      Cc: Santosh Raspatur <santosh@chelsio.com>
      Cc: Hariprasad S <hariprasad@chelsio.com>
      Cc: Casey Leedom <leedom@chelsio.com>
      Signed-off-by: Alexander Duyck <alexander.h.duyck@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      019be1cf
    • mac802154: fix transmission power datatype · 23310f6f
      Varka Bhadram authored
      
      
      The netlink attribute for the power is s8, but for the driver-level
      operations we are collecting the power level value into an int. It has
      to be changed from int to s8. (A sketch of the type change follows this
      entry.)
      
      Signed-off-by: Varka Bhadram <varkab@cdac.in>
      Acked-by: Alexander Aring <alex.aring@gmail.com>
      Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
      23310f6f
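
      The shape of the change, in a hypothetical snippet (the struct below is
      illustrative, not the actual mac802154 definition):

          #include <linux/types.h>

          struct hypothetical_phy_params {
                  /* Transmit power. The netlink attribute carrying it is s8,
                   * so storing it as int risks sign/width mismatches; keep it
                   * s8 end to end.
                   */
                  s8 transmit_power;
          };
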
    • vxlan: do not exit on error in vxlan_stop() · f13b1689
      WANG Cong authored
      
      
      We need to clean up vxlan even if vxlan_igmp_leave() fails.
      
      This fixes the following kernel warning:
      
       WARNING: CPU: 0 PID: 6 at lib/debugobjects.c:263 debug_print_object+0x7c/0x8d()
       ODEBUG: free active (active state 0) object type: timer_list hint: vxlan_cleanup+0x0/0xd0
       CPU: 0 PID: 6 Comm: kworker/u8:0 Not tainted 4.0.0-rc7+ #953
       Hardware name: Bochs Bochs, BIOS Bochs 01/01/2011
       Workqueue: netns cleanup_net
        0000000000000009 ffff88011955f948 ffffffff81a25f5a 00000000253f253e
        ffff88011955f998 ffff88011955f988 ffffffff8107608e 0000000000000000
        ffffffff814deba2 ffff8800d4e94000 ffffffff82254c30 ffffffff81fbe455
       Call Trace:
        [<ffffffff81a25f5a>] dump_stack+0x4c/0x65
        [<ffffffff8107608e>] warn_slowpath_common+0x9c/0xb6
        [<ffffffff814deba2>] ? debug_print_object+0x7c/0x8d
        [<ffffffff81076116>] warn_slowpath_fmt+0x46/0x48
        [<ffffffff814deba2>] debug_print_object+0x7c/0x8d
        [<ffffffff81666bf1>] ? vxlan_fdb_destroy+0x5b/0x5b
        [<ffffffff814dee02>] __debug_check_no_obj_freed+0xc3/0x15f
        [<ffffffff814df728>] debug_check_no_obj_freed+0x12/0x16
        [<ffffffff8117ae4e>] slab_free_hook+0x64/0x6c
        [<ffffffff8114deaa>] ? kvfree+0x31/0x33
        [<ffffffff8117dc66>] kfree+0x101/0x1ac
        [<ffffffff8114deaa>] kvfree+0x31/0x33
        [<ffffffff817d4137>] netdev_freemem+0x18/0x1a
        [<ffffffff817e8b52>] netdev_release+0x2e/0x32
        [<ffffffff815b4163>] device_release+0x5a/0x92
        [<ffffffff814bd4dd>] kobject_cleanup+0x49/0x5e
        [<ffffffff814bd3ff>] kobject_put+0x45/0x49
        [<ffffffff817d3fc1>] netdev_run_todo+0x26f/0x283
        [<ffffffff817d4873>] ? rollback_registered_many+0x20f/0x23b
        [<ffffffff817e0c80>] rtnl_unlock+0xe/0x10
        [<ffffffff817d4af0>] default_device_exit_batch+0x12a/0x139
        [<ffffffff810aadfa>] ? wait_woken+0x8f/0x8f
        [<ffffffff817c8e14>] ops_exit_list+0x2b/0x57
        [<ffffffff817c9b21>] cleanup_net+0x154/0x1e7
        [<ffffffff8108b05d>] process_one_work+0x255/0x4ad
        [<ffffffff8108af69>] ? process_one_work+0x161/0x4ad
        [<ffffffff8108b4b1>] worker_thread+0x1cd/0x2ab
        [<ffffffff8108b2e4>] ? process_scheduled_works+0x2f/0x2f
        [<ffffffff81090686>] kthread+0xd4/0xdc
        [<ffffffff8109eca3>] ? local_clock+0x19/0x22
        [<ffffffff810905b2>] ? __kthread_parkme+0x83/0x83
        [<ffffffff81a31c48>] ret_from_fork+0x58/0x90
        [<ffffffff810905b2>] ? __kthread_parkme+0x83/0x83
      
      For the long term, we should handle NETDEV_{UP,DOWN} events
      from the lower device of a tunnel device. (A sketch of the idea behind
      this fix follows this entry.)
      
      Fixes: 56ef9c90 ("vxlan: Move socket initialization to within rtnl scope")
      Cc: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
      Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
      Acked-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      f13b1689
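
      A sketch of the idea behind the fix (hypothetical, not the actual vxlan
      code): teardown continues even when the multicast leave fails, and the
      error is only propagated at the end, so the ageing timer, FDB, and
      socket are always cleaned up:

          static int vxlan_stop(struct net_device *dev)
          {
                  struct vxlan_dev *vxlan = netdev_priv(dev);
                  int ret = 0;

                  if (vxlan_addr_multicast(&vxlan->default_dst.remote_ip))
                          ret = vxlan_igmp_leave(vxlan);  /* may fail; do not bail out */

                  del_timer_sync(&vxlan->age_timer);      /* always stop the ageing timer */
                  vxlan_flush(vxlan);                     /* always flush the FDB */
                  vxlan_sock_release(vxlan->vn_sock);     /* always drop the socket */

                  return ret;
          }
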