- Dec 15, 2020
-
-
Jakub Kicinski authored
Andra Paraschiv says: ==================== vsock: Add flags field in the vsock address vsock enables communication between virtual machines and the host they are running on. Nested VMs can be set up to use vsock channels, as multi-transport support has been available in mainline since the v5.5 Linux kernel release. Implicitly, if no host->guest vsock transport is loaded, all the vsock packets are forwarded to the host. This behavior can be used to set up communication channels between sibling VMs that are running on the same host. One example is the vsock channels that can be established within AWS Nitro Enclaves (see Documentation/virt/ne_overview.rst). To be able to explicitly mark a connection as being used for a certain use case, add a flags field in the vsock address data structure. The value of the flags field is taken into consideration when the vsock transport is assigned. This way, the kernel can distinguish between different use cases, such as nested VMs / local communication and sibling VMs. The flags field can be set in the user space application connect logic. On the listen path, the field can be set in the kernel space logic. ==================== Link: https://lore.kernel.org/r/20201214161122.37717-1-andraprs@amazon.com Signed-off-by:
Jakub Kicinski <kuba@kernel.org>
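As a rough user-space illustration of the flags field described above (the sibling CID and port below are made-up values), a connection can be marked for host forwarding before connect():

#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/vm_sockets.h>

static int connect_to_sibling(unsigned int sibling_cid, unsigned int port)
{
        struct sockaddr_vm addr;
        int fd = socket(AF_VSOCK, SOCK_STREAM, 0);

        if (fd < 0)
                return -1;

        memset(&addr, 0, sizeof(addr));
        addr.svm_family = AF_VSOCK;
        addr.svm_cid = sibling_cid;             /* CID of the sibling VM */
        addr.svm_port = port;
        addr.svm_flags = VMADDR_FLAG_TO_HOST;   /* forward packets to the host */

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
                close(fd);
                return -1;
        }
        return fd;
}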
-
Andra Paraschiv authored
The vsock flags field can be set in the connect path (user space app) and the (listen) receive path (kernel space logic). When the vsock transport is assigned, the remote CID is used to distinguish between types of connection. Use the vsock flags value (in addition to the CID) from the remote address to decide which vsock transport to assign. For the sibling VMs use case, all the vsock packets need to be forwarded to the host, so always assign the guest->host transport if the VMADDR_FLAG_TO_HOST flag is set. For the other use cases, the vsock transport assignment logic is not changed. Changelog v3 -> v4 * Update the "remote_flags" local variable type to reflect the change of the "svm_flags" field to be 1 byte in size. v2 -> v3 * Update bitwise check logic to not compare result to the flag value. v1 -> v2 * Use bitwise operator to check the vsock flag. * Use the updated "VMADDR_FLAG_TO_HOST" flag naming. * Merge the checks for the g2h transport assignment in one "if" block. Signed-off-by:
Andra Paraschiv <andraprs@amazon.com> Reviewed-by:
Stefano Garzarella <sgarzare@redhat.com> Signed-off-by:
Jakub Kicinski <kuba@kernel.org>
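A condensed sketch of the assignment rule this patch describes (the helper name is hypothetical; the real logic lives inline in vsock_assign_transport(), and transport_h2g is the module-local pointer to the loaded host->guest transport): pick the guest->host transport when the remote CID is the host, when no host->guest transport is loaded, or when the to-host flag is set.

static bool vsock_want_g2h(const struct sockaddr_vm *remote)
{
        return remote->svm_cid <= VMADDR_CID_HOST ||
               !transport_h2g ||
               (remote->svm_flags & VMADDR_FLAG_TO_HOST);
}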
-
Andra Paraschiv authored
The vsock flags can be set during the connect() setup logic, when initializing the vsock address data structure variable. Then the vsock transport is assigned, also considering this flags field. The vsock transport is also assigned on the (listen) receive path. The flags field needs to be set considering the use case. Set the value of the vsock flags of the remote address to the one targeted for packets forwarding to the host, if the following conditions are met: * The source CID of the packet is higher than VMADDR_CID_HOST. * The destination CID of the packet is higher than VMADDR_CID_HOST. Changelog v3 -> v4 * No changes. v2 -> v3 * No changes. v1 -> v2 * Set the vsock flag on the receive path in the vsock transport assignment logic. * Use bitwise operator for the vsock flag setup. * Use the updated "VMADDR_FLAG_TO_HOST" flag naming. Signed-off-by:
Andra Paraschiv <andraprs@amazon.com> Reviewed-by:
Stefano Garzarella <sgarzare@redhat.com> Signed-off-by:
Jakub Kicinski <kuba@kernel.org>
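A sketch of the receive-path rule above (hypothetical helper; in the kernel this is done inline when the transport is assigned for a connection created from a listener): if neither CID belongs to the host, the reply traffic must be forwarded through the host.

static void vsock_flag_remote_to_host(struct vsock_sock *vsk)
{
        if (vsk->local_addr.svm_cid > VMADDR_CID_HOST &&
            vsk->remote_addr.svm_cid > VMADDR_CID_HOST)
                vsk->remote_addr.svm_flags |= VMADDR_FLAG_TO_HOST;
}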
-
Andra Paraschiv authored
Check if the provided flags value from the vsock address data structure includes the supported flags in the corresponding kernel version. The first byte of the "svm_zero" field is used as "svm_flags", so add the flags check instead. Changelog v3 -> v4 * New patch in v4. Signed-off-by:
Andra Paraschiv <andraprs@amazon.com> Reviewed-by:
Stefano Garzarella <sgarzare@redhat.com> Signed-off-by:
Jakub Kicinski <kuba@kernel.org>
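A sketch of the check this patch adds (helper name is illustrative): reject any flag bit the running kernel does not know about, so newer flags fail cleanly on older kernels.

static int vsock_validate_addr_flags(const struct sockaddr_vm *addr)
{
        const __u8 supported = VMADDR_FLAG_TO_HOST;

        if (addr->svm_flags & ~supported)
                return -EINVAL;
        return 0;
}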
-
Andra Paraschiv authored
Add VMADDR_FLAG_TO_HOST vsock flag that is used to setup a vsock connection where all the packets are forwarded to the host. Then, using this type of vsock channel, vsock communication between sibling VMs can be built on top of it. Changelog v3 -> v4 * Update the "VMADDR_FLAG_TO_HOST" value, as the size of the field has been updated to 1 byte. v2 -> v3 * Update comments to mention when the flag is set in the connect and listen paths. v1 -> v2 * New patch in v2, it was split from the first patch in the series. * Remove the default value for the vsock flags field. * Update the naming for the vsock flag to "VMADDR_FLAG_TO_HOST". Signed-off-by:
Andra Paraschiv <andraprs@amazon.com> Reviewed-by:
Stefano Garzarella <sgarzare@redhat.com> Signed-off-by:
Jakub Kicinski <kuba@kernel.org>
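For reference, after the field was shrunk to one byte the flag ends up defined roughly like this in the vsock UAPI header:

/* Request that packets for this connection be forwarded to the host, e.g.
 * to reach a sibling VM. Set by user space before connect(), or by the
 * kernel on the listen/receive path.
 */
#define VMADDR_FLAG_TO_HOST 0x01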
-
Andra Paraschiv authored
vsock enables communication between virtual machines and the host they are running on. With the multi transport support (guest->host and host->guest), nested VMs can also use vsock channels for communication. In addition to this, by default, all the vsock packets are forwarded to the host, if no host->guest transport is loaded. This behavior can be implicitly used for enabling vsock communication between sibling VMs. Add a flags field in the vsock address data structure that can be used to explicitly mark the vsock connection as being targeted for a certain type of communication. This way, we can distinguish between different use cases such as nested VMs and sibling VMs. This field can be set when initializing the vsock address variable used for the connect() call. Changelog v3 -> v4 * Update the size of "svm_flags" field to be 1 byte instead of 2 bytes. v2 -> v3 * Add "svm_flags" as a new field, not reusing "svm_reserved1". v1 -> v2 * Update the field name to "svm_flags". * Split the current patch in 2 patches. Signed-off-by:
Andra Paraschiv <andraprs@amazon.com> Reviewed-by:
Stefano Garzarella <sgarzare@redhat.com> Signed-off-by:
Jakub Kicinski <kuba@kernel.org>
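The resulting address layout looks roughly like this (one byte carved out of the former svm_zero padding, which must otherwise remain zero):

struct sockaddr_vm {
        __kernel_sa_family_t svm_family;        /* AF_VSOCK */
        unsigned short svm_reserved1;
        unsigned int svm_port;
        unsigned int svm_cid;
        __u8 svm_flags;                         /* new 1-byte flags field */
        unsigned char svm_zero[sizeof(struct sockaddr) -
                               sizeof(sa_family_t) -
                               sizeof(unsigned short) -
                               sizeof(unsigned int) -
                               sizeof(unsigned int) -
                               sizeof(__u8)];
};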
-
Tariq Toukan authored
With NETIF_F_HW_TLS_TX, packets are encrypted in HW. This cannot logically be done when HW_CSUM offload is off. Fixes: 2342a851 ("net: Add TLS TX offload features") Signed-off-by:
Tariq Toukan <tariqt@nvidia.com> Reviewed-by:
Boris Pismenny <borisp@nvidia.com> Link: https://lore.kernel.org/r/20201213143929.26253-1-tariqt@nvidia.com Signed-off-by:
Jakub Kicinski <kuba@kernel.org>
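The dependency fix-up has roughly this shape in netdev_fix_features()-style code (a sketch of the idea, not the verbatim patch): TLS TX offload computes checksums in hardware, so it is dropped whenever full checksum offload is off.

        if ((features & NETIF_F_HW_TLS_TX) && !(features & NETIF_F_HW_CSUM)) {
                netdev_dbg(dev, "Dropping TLS TX HW offload feature since no CSUM feature.\n");
                features &= ~NETIF_F_HW_TLS_TX;
        }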
-
Alexander Duyck authored
There are cases where a fastopen SYN may trigger either an ICMP_TOOBIG message in the case of IPv6 or a fragmentation request in the case of IPv4. This results in the socket stalling for a second or more as it does not respond to the message by retransmitting the SYN frame. Normally a SYN frame should not be able to trigger an ICMP_TOOBIG or ICMP_FRAG_NEEDED message, however in the case of fastopen we can have a frame that makes use of the entire MSS, so it does. An additional complication is that the retransmit queue doesn't contain the original frames. As a result, when tcp_simple_retransmit is called and walks the list of frames in the queue it may not mark the frames as lost because both the SYN and the data packet are each individually smaller than the MSS size after the adjustment. This results in the socket being stalled until the retransmit timer kicks in and forces the SYN frame out again without the data attached. In order to resolve this we can reduce the MSS the packets are compared to in tcp_simple_retransmit to -1 for cases where we are still in the TCP_SYN_SENT state for a fastopen socket. Doing this we will mark all of the packets related to the fastopen SYN as lost. Signed-off-by:
Alexander Duyck <alexanderduyck@fb.com> Signed-off-by:
Eric Dumazet <edumazet@google.com> Signed-off-by:
Yuchung Cheng <ycheng@google.com> Link: https://lore.kernel.org/r/160780498125.3272.15437756269539236825.stgit@localhost.localdomain Signed-off-by:
Jakub Kicinski <kuba@kernel.org>
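A hypothetical helper showing the MSS choice described above (in the kernel the adjustment is made inline in tcp_simple_retransmit()): returning -1 makes every skb in the retransmit queue, including the small SYN and data skbs of a fastopen connect, compare as over-sized and get marked lost.

static int tcp_simple_retransmit_mss(struct sock *sk)
{
        struct tcp_sock *tp = tcp_sk(sk);

        /* A fastopen SYN travels as two skbs (SYN and data), each smaller
         * than the adjusted MSS; -1 forces both to be marked lost.
         */
        if (tp->syn_data && sk->sk_state == TCP_SYN_SENT)
                return -1;

        return tcp_current_mss(sk);
}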
-
Vladimir Oltean authored
Currently ocelot_set_rx_mode calls ocelot_mact_learn directly, which has a very nice ocelot_mact_wait_for_completion at the end. Introduced in commit 639c1b26 ("net: mscc: ocelot: Register poll timeout should be wall time not attempts"), this function uses readx_poll_timeout which triggers a lot of lockdep warnings and is also dangerous to use from atomic context, potentially leading to lockups and panics. Steen Hegelund added a poll timeout of 100 ms for checking the MAC table, a duration which is clearly absurd to poll in atomic context. So we need to defer the MAC table access to process context, which we do via a dynamically allocated workqueue which contains all there is to know about the MAC table operation it has to do. Signed-off-by:
Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by:
Florian Fainelli <f.fainelli@gmail.com> Link: https://lore.kernel.org/r/20201212191612.222019-1-vladimir.oltean@nxp.com Signed-off-by:
Jakub Kicinski <kuba@kernel.org>
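The deferral pattern reads roughly like this (a sketch: the work-context struct layout and the ocelot->owq workqueue name are assumptions, while ocelot_mact_learn() is the existing helper that polls for completion): package the operation into a dynamically allocated work item queued from atomic context, and do the register polling from the worker.

struct ocelot_mact_work_ctx {
        struct work_struct work;
        struct ocelot *ocelot;
        unsigned char addr[ETH_ALEN];
        u16 vid;
};

static void ocelot_mact_work(struct work_struct *work)
{
        struct ocelot_mact_work_ctx *w =
                container_of(work, struct ocelot_mact_work_ctx, work);

        /* Process context: polling the MAC table completion bit is safe. */
        ocelot_mact_learn(w->ocelot, PGID_CPU, w->addr, w->vid,
                          ENTRYTYPE_LOCKED);
        kfree(w);
}

static int ocelot_enqueue_mact_learn(struct ocelot *ocelot,
                                     const unsigned char *addr, u16 vid)
{
        struct ocelot_mact_work_ctx *w = kzalloc(sizeof(*w), GFP_ATOMIC);

        if (!w)
                return -ENOMEM;

        w->ocelot = ocelot;
        ether_addr_copy(w->addr, addr);
        w->vid = vid;
        INIT_WORK(&w->work, ocelot_mact_work);
        queue_work(ocelot->owq, &w->work);      /* per-switch workqueue (name assumed) */
        return 0;
}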
-
Bongsu Jeon authored
Add code to release the NFC firmware when the firmware image size is wrong. Fixes: c04c674f ("nfc: s3fwrn5: Add driver for Samsung S3FWRN5 NFC Chip") Signed-off-by:
Bongsu Jeon <bongsu.jeon@samsung.com> Reviewed-by:
Krzysztof Kozlowski <krzk@kernel.org> Link: https://lore.kernel.org/r/20201213095850.28169-1-bongsu.jeon@samsung.com Signed-off-by:
Jakub Kicinski <kuba@kernel.org>
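A sketch of the added error path (helper name and message are illustrative; release_firmware() is the standard API being added to this path): when the image size does not match expectations, the firmware reference is dropped before returning.

static int s3fwrn5_check_fw_size(struct device *dev,
                                 const struct firmware *fw, size_t expected)
{
        if (fw->size < expected) {
                dev_err(dev, "invalid firmware image size\n");
                release_firmware(fw);   /* the release this patch adds */
                return -EINVAL;
        }
        return 0;
}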
-
Jakub Kicinski authored
This code is copying strings in 64 bit quantities, but the device returns them in big endian. As long as we store them in big endian as well (IOW, the endianness on both sides matches), we're good, so swap to be64, not from be64. This fixes ~60 sparse warnings. Link: https://lore.kernel.org/r/20201212234426.177015-1-kuba@kernel.org Signed-off-by:
Jakub Kicinski <kuba@kernel.org>
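The direction of the swap can be illustrated with a fragment like this (the buffer layout and the readq()-style register access are assumptions): the device hands back big-endian 64-bit words that are really string bytes, so they are stored big-endian too.

        __be64 *dst = (__be64 *)buf;
        int i;

        for (i = 0; i < len / sizeof(u64); i++)
                dst[i] = cpu_to_be64(readq(io_base + i * sizeof(u64)));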
-
Jakub Kicinski authored
Merge tag 'linux-can-next-for-5.11-20201214' of git://git.kernel.org/pub/scm/linux/kernel/git/mkl/linux-can-next Marc Kleine-Budde says: ==================== pull-request: can-next 2020-12-14 All 7 patches are by me and target the m_can driver. First there are 4 cleanup patches (fix link to doc, fix coding style, uniform variable name usage, mark function as static). Then the driver is converted to pm_runtime_resume_and_get(). The next patch lets the m_can class driver allocate the driver's private data, to get rid of one level of indirection. And the last patch consistently uses struct m_can_classdev as drvdata over all binding drivers. * tag 'linux-can-next-for-5.11-20201214' of git://git.kernel.org/pub/scm/linux/kernel/git/mkl/linux-can-next: can: m_can: use struct m_can_classdev as drvdata can: m_can: let m_can_class_allocate_dev() allocate driver specific private data can: m_can: m_can_clk_start(): make use of pm_runtime_resume_and_get() can: m_can: m_can_config_endisable(): mark as static can: m_can: use cdev as name for struct m_can_classdev uniformly can: m_can: convert indention to kernel coding style can: m_can: update link to M_CAN user manual ==================== Link: https://lore.kernel.org/r/20201214133145.442472-1-mkl@pengutronix.de Signed-off-by:
Jakub Kicinski <kuba@kernel.org>
-
Jakub Kicinski authored
Ido Schimmel says: ==================== mlxsw: Introduce initial XM router support This patch set implements initial eXtended Mezzanine (XM) router support. The XM is an external device connected to the Spectrum-{2,3} ASICs using dedicated Ethernet ports. Its purpose is to increase the number of routes that can be offloaded to hardware. This is achieved by having the ASIC act as a cache that refers cache misses to the XM where the FIB is stored and LPM lookup is performed. Future patch sets will add more sophisticated cache flushing and selftests that utilize cache counters on the ASIC, which we plan to expose via devlink-metric [1]. Patch set overview: Patches #1-#2 add registers to insert/remove routes to/from the XM and to enable/disable it. Patch #3 utilizes these registers in order to implement XM-specific router low-level operations. Patches #4-#5 query from firmware the availability of the XM and the local ports that are used to connect the ASIC to the XM, so that netdevs will not be created for them. Patches #6-#8 initialize the XM by configuring its cache parameters. Patch #9-#10 implement cache management, so that LPM lookup will be correctly cached in the ASIC. Patches #11-#13 implement cache flushing, so that routes insertions/removals to/from the XM will flush the affected entries in the cache. Patch #14 configures the ASIC to allocate half of its memory for the cache, so that room will be left for other entries (e.g., FDBs, neighbours). Patch #15 starts using the XM for IPv4 route offload, when available. [1] https://lore.kernel.org/netdev/20200817125059.193242-1-idosch@idosch.org/ ==================== Link: https://lore.kernel.org/r/20201214113041.2789043-1-idosch@idosch.org Signed-off-by:
Jakub Kicinski <kuba@kernel.org>
-
Jiri Pirko authored
In case the eXtended mezzanine is present on the system, use it for IPv4 router offload. Signed-off-by:
Jiri Pirko <jiri@nvidia.com> Signed-off-by:
Ido Schimmel <idosch@nvidia.com> Signed-off-by:
Jakub Kicinski <kuba@kernel.org>
-
Jiri Pirko authored
Set a profile option to instruct FW to use 1/2 of KVH for XLT cache, not the whole one. Signed-off-by:
Jiri Pirko <jiri@nvidia.com> Signed-off-by:
Ido Schimmel <idosch@nvidia.com> Signed-off-by:
Jakub Kicinski <kuba@kernel.org>
-
Jiri Pirko authored
Upon route insertion and removal, it is needed to flush possibly cached entries from the XM cache. Extend the XM op context to carry information needed for the flush. Implement the flush in delayed work since for HW design reasons there is a need to wait 50usec before the flush can be done. If the same flush request arrives during this time, consolidate it into the first one. Implement these queued flushes using a hashtable. v2: * Fix GENMASK() high bit Signed-off-by:
Jiri Pirko <jiri@nvidia.com> Signed-off-by:
Ido Schimmel <idosch@nvidia.com> Signed-off-by:
Jakub Kicinski <kuba@kernel.org>
-
Jiri Pirko authored
The RLPMCE allows disabling the LPM cache. Can be changed on the fly. Signed-off-by:
Jiri Pirko <jiri@nvidia.com> Signed-off-by:
Ido Schimmel <idosch@nvidia.com> Signed-off-by:
Jakub Kicinski <kuba@kernel.org>
-
Jiri Pirko authored
The RLCMLD register is used to bulk delete the XLT-LPM cache ML entries. This can be used by SW when L is increased or decreased, in which case entries with old ML values need to be removed. Signed-off-by:
Jiri Pirko <jiri@nvidia.com> Signed-off-by:
Ido Schimmel <idosch@nvidia.com> Signed-off-by:
Jakub Kicinski <kuba@kernel.org>
-
Jiri Pirko authored
There is a table that assigns L-value per M-index. The L is always the biggest from the currently inserted prefixes. Setup a hashtable to track the M-index information and the prefixes that are related to it. Ensure the L-value is always correctly set. Signed-off-by:
Jiri Pirko <jiri@nvidia.com> Signed-off-by:
Ido Schimmel <idosch@nvidia.com> Signed-off-by:
Jakub Kicinski <kuba@kernel.org>
-
Jiri Pirko authored
The XRMT configures the M-Table for the XLT-LPM. Signed-off-by:
Jiri Pirko <jiri@nvidia.com> Signed-off-by:
Ido Schimmel <idosch@nvidia.com> Signed-off-by:
Jakub Kicinski <kuba@kernel.org>
-
Jiri Pirko authored
During the router init flow, call into XM code and initialize a couple of items needed for XM functionality: 1) Query the capabilities and sizes. Check the XM device id. 2) Initialize the M-value. Note that currently the M-value is set fixed to 16 for IPv4. In the future this may change to better cover the actual inserted routes. Signed-off-by:
Jiri Pirko <jiri@nvidia.com> Signed-off-by:
Ido Schimmel <idosch@nvidia.com> Signed-off-by:
Jakub Kicinski <kuba@kernel.org>
-
Jiri Pirko authored
The XLTQ is used to query HW for XM-related info. Signed-off-by:
Jiri Pirko <jiri@nvidia.com> Signed-off-by:
Ido Schimmel <idosch@nvidia.com> Signed-off-by:
Jakub Kicinski <kuba@kernel.org>
-
Jiri Pirko authored
The RXLTM configures and selects the M for the XM lookups. Signed-off-by:
Jiri Pirko <jiri@nvidia.com> Signed-off-by:
Ido Schimmel <idosch@nvidia.com> Signed-off-by:
Jakub Kicinski <kuba@kernel.org>
-
Jiri Pirko authored
Use the info stored in the bus_info struct about the eXtended mezzanine connected ports and don't expose them. Signed-off-by:
Jiri Pirko <jiri@nvidia.com> Signed-off-by:
Ido Schimmel <idosch@nvidia.com> Signed-off-by:
Jakub Kicinski <kuba@kernel.org>
-
Jiri Pirko authored
The output of the boardinfo command was extended to contain information about the XM. It indicates whether the XM is present and, if so, which local ports are used for the connection. So parse this info and store it in the bus_info passed up to the driver. Signed-off-by:
Jiri Pirko <jiri@nvidia.com> Signed-off-by:
Ido Schimmel <idosch@nvidia.com> Signed-off-by:
Jakub Kicinski <kuba@kernel.org>
-
Jiri Pirko authored
In order to offload entries to XM, implement a set of low-level functions to work with LPM trees in XM and also to pack and write FIB entries into XM. Signed-off-by:
Jiri Pirko <jiri@nvidia.com> Signed-off-by:
Ido Schimmel <idosch@nvidia.com> Signed-off-by:
Jakub Kicinski <kuba@kernel.org>
-
Jiri Pirko authored
The RXLTE enables XLT (eXtended Lookup Table) LPM lookups if a capable XM is present on the system. Signed-off-by:
Jiri Pirko <jiri@nvidia.com> Signed-off-by:
Ido Schimmel <idosch@nvidia.com> Signed-off-by:
Jakub Kicinski <kuba@kernel.org>
-
Jiri Pirko authored
The XMDR allows direct access to the XM device via the switch. Signed-off-by:
Jiri Pirko <jiri@nvidia.com> Signed-off-by:
Ido Schimmel <idosch@nvidia.com> Signed-off-by:
Jakub Kicinski <kuba@kernel.org>
-
Jakub Kicinski authored
Michael Chan says: ==================== bnxt_en: Improve firmware flashing. This patchset improves firmware flashing in 2 ways: - If firmware returns NO_SPACE error during flashing, the driver will create the UPDATE directory with more staging area and retry. - Instead of allocating a big DMA buffer for the entire contents of the firmware package size, fallback to a smaller buffer to DMA the contents in multiple DMA operations. ==================== Link: https://lore.kernel.org/r/1607860306-17244-1-git-send-email-michael.chan@broadcom.com Signed-off-by:
Jakub Kicinski <kuba@kernel.org>
-
Michael Chan authored
The current scheme allocates a DMA buffer as big as the requested firmware package file and DMAs the contents to firmware in one operation. The buffer size can be several hundred kilobytes and the driver may not be able to allocate the memory. This will cause firmware upgrade to fail. Improve the scheme by using smaller DMA blocks and calling firmware to DMA each block in a batch mode. Older firmware can cause excessive NVRAM erases if the block size is too small, so we try to allocate a 256K buffer to begin with and size it down successively if we cannot allocate the memory. Reviewed-by:
Edwin Peer <edwin.peer@broadcom.com> Signed-off-by:
Michael Chan <michael.chan@broadcom.com> Signed-off-by:
Jakub Kicinski <kuba@kernel.org>
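The allocation fallback might look roughly like this (constant names are illustrative, not the driver's): start from a 256K coherent DMA block and halve it until the allocation succeeds, then transfer the package in blocks of that size.

#define FLASH_DMA_MAX   (256 * 1024)
#define FLASH_DMA_MIN   (4 * 1024)

        size_t dma_size = FLASH_DMA_MAX;
        void *buf = NULL;
        dma_addr_t mapping;

        while (dma_size >= FLASH_DMA_MIN) {
                buf = dma_alloc_coherent(&pdev->dev, dma_size, &mapping,
                                         GFP_KERNEL);
                if (buf)
                        break;
                dma_size /= 2;  /* fall back to a smaller block */
        }
        if (!buf)
                return -ENOMEM;
        /* ...DMA the package to firmware in dma_size-sized batches... */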
-
Pavan Chebbi authored
In bnxt_flash_package_from_fw_obj(), if firmware returns the NO_SPACE error, call __bnxt_flash_nvram() to create the UPDATE directory and then loop back and retry one more time. Since the first try may fail, we use the silent version to send the firmware commands. Reviewed-by:
Vasundhara Volam <vasundhara-v.volam@broadcom.com> Reviewed-by:
Edwin Peer <edwin.peer@broadcom.com> Signed-off-by:
Pavan Chebbi <pavan.chebbi@broadcom.com> Signed-off-by:
Michael Chan <michael.chan@broadcom.com> Signed-off-by:
Jakub Kicinski <kuba@kernel.org>
-
Pavan Chebbi authored
On NICs with a smaller NVRAM, FW installation may fail after multiple updates due to fragmentation. The driver can retry when FW returns a special error code. To facilitate the retry, we restructure the logic that performs the flashing in a loop. The actual retry logic will be added in the next patch. Signed-off-by:
Pavan Chebbi <pavan.chebbi@broadcom.com> Signed-off-by:
Michael Chan <michael.chan@broadcom.com> Signed-off-by:
Jakub Kicinski <kuba@kernel.org>
-
Michael Chan authored
This function will be modified in the next patch to retry flashing the firmware in a loop. To facilitate that, we rearrange the code so that the steps that only need to be done once before the loop are moved to the top of the function. Signed-off-by:
Michael Chan <michael.chan@broadcom.com> Signed-off-by:
Jakub Kicinski <kuba@kernel.org>
-
Pavan Chebbi authored
Refactor bnxt_flash_nvram() into __bnxt_flash_nvram() that takes an additional dir_item_len parameter. The new function will be used in subsequent patches with the dir_item_len parameter set to create the UPDATE directory during flashing. Signed-off-by:
Pavan Chebbi <pavan.chebbi@broadcom.com> Signed-off-by:
Michael Chan <michael.chan@broadcom.com> Signed-off-by:
Jakub Kicinski <kuba@kernel.org>
-
Marcin Wojtas authored
Since its creation, mvpp2, the Marvell NIC driver for the Armada 375/7k8k and CN913x SoC families, has been lacking an entry in MAINTAINERS, which sometimes led to unhandled bugs that persisted across several kernel releases. Signed-off-by:
Marcin Wojtas <mw@semihalf.com> Acked-by:
Andrew Lunn <andrew@lunn.ch> Link: https://lore.kernel.org/r/20201211165114.26290-1-mw@semihalf.com Signed-off-by:
Jakub Kicinski <kuba@kernel.org>
-
Vasily Averin authored
syzbot reproduces a BUG_ON in skb_checksum_help(): tun creates a (bogus) skb with a huge partial-checksummed area and a small ip packet inside. Then ip_rcv trims the skb based on the size of the internal ip packet; after that the csum offset points beyond the trimmed skb. Then checksum_tg(), called via a netfilter hook, triggers the BUG_ON: offset = skb_checksum_start_offset(skb); BUG_ON(offset >= skb_headlen(skb)); To work around the problem this patch forces pskb_trim_rcsum_slow() to return -EINVAL in the described scenario. It allows its callers to drop such packets. Link: https://syzkaller.appspot.com/bug?id=b419a5ca95062664fe1a60b764621eb4526e2cd0 Reported-by:
<syzbot+7010af67ced6105e5ab6@syzkaller.appspotmail.com> Signed-off-by:
Vasily Averin <vvs@virtuozzo.com> Acked-by:
Willem de Bruijn <willemb@google.com> Link: https://lore.kernel.org/r/1b2494af-2c56-8ee2-7bc0-923fcad1cdf8@virtuozzo.com Signed-off-by:
Jakub Kicinski <kuba@kernel.org>
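The guard added to pskb_trim_rcsum_slow() is, in spirit, a check of this form (a sketch; the exact condition in the patch may differ): if the checksum start of a CHECKSUM_PARTIAL skb would land at or beyond the trimmed length, refuse the trim so callers drop the packet instead of hitting the BUG_ON later.

        if (skb->ip_summed == CHECKSUM_PARTIAL &&
            skb_checksum_start_offset(skb) + sizeof(__sum16) > len)
                return -EINVAL;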
-
Toke Høiland-Jørgensen authored
Jakub pointed out that the IP_ECN_set* helpers basically open-code csum16_add(), so let's switch them over to using the helper instead. v2: - Use __be16 for check_add stack variable in IP_ECN_set_ce() (kbot) v3: - Turns out we need __force casts to do arithmetic on __be16 types Reported-by:
Jakub Kicinski <kuba@kernel.org> Tested-by:
Jonathan Morton <chromatix99@gmail.com> Tested-by:
Pete Heist <pete@heistp.net> Signed-off-by:
Toke Høiland-Jørgensen <toke@redhat.com> Link: https://lore.kernel.org/r/20201211142638.154780-1-toke@redhat.com Signed-off-by:
Jakub Kicinski <kuba@kernel.org>
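A condensed reconstruction of the converted helper (comments trimmed; treat it as a sketch of the shape rather than the exact diff): the checksum delta is built as a __be16 with __force casts and folded in via csum16_add().

static inline int IP_ECN_set_ce(struct iphdr *iph)
{
        u32 ecn = (iph->tos + 1) & INET_ECN_MASK;
        __be16 check_add;

        if (!(ecn & 2))
                return !ecn;

        /* ECT_1 => check += htons(0xFFFD), ECT_0 => check += htons(0xFFFE) */
        check_add = (__force __be16)((__force u16)htons(0xFFFB) +
                                     (__force u16)htons(ecn));

        iph->check = csum16_add(iph->check, check_add);
        iph->tos |= INET_ECN_CE;
        return 1;
}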
-
Wang Hai authored
I got a warning report: br_sysfs_addbr: can't create group bridge4/bridge ------------[ cut here ]------------ sysfs group 'bridge' not found for kobject 'bridge4' WARNING: CPU: 2 PID: 9004 at fs/sysfs/group.c:279 sysfs_remove_group fs/sysfs/group.c:279 [inline] WARNING: CPU: 2 PID: 9004 at fs/sysfs/group.c:279 sysfs_remove_group+0x153/0x1b0 fs/sysfs/group.c:270 Modules linked in: iptable_nat ... Call Trace: br_dev_delete+0x112/0x190 net/bridge/br_if.c:384 br_dev_newlink net/bridge/br_netlink.c:1381 [inline] br_dev_newlink+0xdb/0x100 net/bridge/br_netlink.c:1362 __rtnl_newlink+0xe11/0x13f0 net/core/rtnetlink.c:3441 rtnl_newlink+0x64/0xa0 net/core/rtnetlink.c:3500 rtnetlink_rcv_msg+0x385/0x980 net/core/rtnetlink.c:5562 netlink_rcv_skb+0x134/0x3d0 net/netlink/af_netlink.c:2494 netlink_unicast_kernel net/netlink/af_netlink.c:1304 [inline] netlink_unicast+0x4a0/0x6a0 net/netlink/af_netlink.c:1330 netlink_sendmsg+0x793/0xc80 net/netlink/af_netlink.c:1919 sock_sendmsg_nosec net/socket.c:651 [inline] sock_sendmsg+0x139/0x170 net/socket.c:671 ____sys_sendmsg+0x658/0x7d0 net/socket.c:2353 ___sys_sendmsg+0xf8/0x170 net/socket.c:2407 __sys_sendmsg+0xd3/0x190 net/socket.c:2440 do_syscall_64+0x33/0x40 arch/x86/entry/common.c:46 entry_SYSCALL_64_after_hwframe+0x44/0xa9 In br_device_event(), if adding the bridge sysfs fails, br_device_event() should return an error. This prevents the warning when removing bridge sysfs groups that do not exist. Fixes: bb900b27 ("bridge: allow creating bridge devices with netlink") Reported-by:
Hulk Robot <hulkci@huawei.com> Signed-off-by:
Wang Hai <wanghai38@huawei.com> Tested-by:
Nikolay Aleksandrov <nikolay@nvidia.com> Acked-by:
Nikolay Aleksandrov <nikolay@nvidia.com> Link: https://lore.kernel.org/r/20201211122921.40386-1-wanghai38@huawei.com Signed-off-by:
Jakub Kicinski <kuba@kernel.org>
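The fix amounts to checking the return value of br_sysfs_addbr() in the NETDEV_REGISTER handling of br_device_event() and propagating it, roughly like this (a sketch of the shape, not the verbatim hunk):

        if (event == NETDEV_REGISTER) {
                /* register of bridge completed, add sysfs entries */
                err = br_sysfs_addbr(dev);
                if (err)
                        return notifier_from_errno(err);
        }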
-
Björn Töpel authored
Instead of doing the allocation check in each iteration, move it outside the while loop and do it once per NAPI loop. This change boosts the xdpsock rxdrop scenario with 15% more packets-per-second. Reviewed-by:
Maciej Fijalkowski <maciej.fijalkowski@intel.com> Signed-off-by:
Björn Töpel <bjorn.topel@intel.com> Link: https://lore.kernel.org/r/20201211085410.59350-1-bjorn.topel@gmail.com Signed-off-by:
Jakub Kicinski <kuba@kernel.org>
-
Zheng Yongjun authored
Simplify the return expressions in the mtk_eth_path.c file. Signed-off-by:
Zheng Yongjun <zhengyongjun3@huawei.com> Link: https://lore.kernel.org/r/20201211083801.1632-1-zhengyongjun3@huawei.com Signed-off-by:
Jakub Kicinski <kuba@kernel.org>
-