- Jun 09, 2020
-
-
Geliang Tang authored
In MPTCPOPT_RM_ADDR option parsing, the pointer "ptr" pointed to the "Subtype" octet, the pointer "ptr+1" pointed to the "Address ID" octet:

    +-------+-------+---------------+
    |Subtype|(resvd)|   Address ID  |
    +-------+-------+---------------+
    |               |
   ptr             ptr+1

We should set mp_opt->rm_id to the value of "ptr+1", not "ptr". This patch will fix this bug. Fixes: 3df523ab ("mptcp: Add ADD_ADDR handling") Signed-off-by:
Geliang Tang <geliangtang@gmail.com> Reviewed-by:
Matthieu Baerts <matthieu.baerts@tessares.net> Signed-off-by:
David S. Miller <davem@davemloft.net>
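A minimal sketch of the corrected parse described above (illustrative only; the field and variable names roughly follow the mptcp option-parsing code but are not copied from it):

    case MPTCPOPT_RM_ADDR:
            /* ptr points at the Subtype/(resvd) octet; the Address ID
             * lives in the following octet, so read ptr + 1, not ptr. */
            mp_opt->rm_addr = 1;
            mp_opt->rm_id = *(ptr + 1);
            break;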
-
Arjun Roy authored
Use vm_insert_pages() for tcp receive zerocopy. Spin lock cycles (as reported by perf) drop from a couple of percentage points to a fraction of a percent. This results in a roughly 6% increase in efficiency, measured as zerocopy receive count divided by CPU utilization. The intention of this patchset is to reduce atomic ops for tcp zerocopy receives, which normally hit the same spinlock multiple times consecutively. [akpm@linux-foundation.org: suppress gcc-7.2.0 warning] Link: http://lkml.kernel.org/r/20200128025958.43490-3-arjunroy.kdev@gmail.com Signed-off-by:
Arjun Roy <arjunroy@google.com> Signed-off-by:
Eric Dumazet <edumazet@google.com> Signed-off-by:
Soheil Hassas Yeganeh <soheil@google.com> Cc: David Miller <davem@davemloft.net> Cc: Matthew Wilcox <willy@infradead.org> Cc: Jason Gunthorpe <jgg@ziepe.ca> Cc: Stephen Rothwell <sfr@canb.auug.org.au> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
David S. Miller <davem@davemloft.net>
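A sketch of the batching idea (not the actual tcp_zerocopy_receive() code): map a whole array of pages with one vm_insert_pages() call, which takes the page-table lock once, instead of calling vm_insert_page() in a loop and taking the lock per page. The helper name and error handling here are illustrative:

    static int zc_map_batch(struct vm_area_struct *vma, unsigned long addr,
                            struct page **pages, unsigned long nr)
    {
            unsigned long remaining = nr;
            int err = vm_insert_pages(vma, addr, pages, &remaining);

            /* On return, 'remaining' holds the number of pages left unmapped. */
            if (err)
                    return err;
            return remaining ? -EFAULT : 0;
    }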
-
- Jun 08, 2020
-
-
Flavio Suligoi authored
In the files:
- net/mac80211/rx.c
- net/wireless/Kconfig
the wiki URL is still the old "wireless.kernel.org" instead of the new "wireless.wiki.kernel.org". Signed-off-by:
Flavio Suligoi <f.suligoi@asem.it> Link: https://lore.kernel.org/r/20200605154112.16277-10-f.suligoi@asem.it Signed-off-by:
Johannes Berg <johannes.berg@intel.com>
-
- Jun 05, 2020
-
-
Stefano Garzarella authored
Fix the following gcc-9.3 warning when building with 'make W=1':

    net/vmw_vsock/vmci_transport.c:2058:6: warning: no previous prototype for ‘vmci_vsock_transport_cb’ [-Wmissing-prototypes]
     2058 | void vmci_vsock_transport_cb(bool is_host)
          |      ^~~~~~~~~~~~~~~~~~~~~~~

Fixes: b1bba80a ("vsock/vmci: register vmci_transport only when VMCI guest/host are active") Reported-by:
kernel test robot <lkp@intel.com> Signed-off-by:
Stefano Garzarella <sgarzare@redhat.com> Signed-off-by:
David S. Miller <davem@davemloft.net>
-
Dan Carpenter authored
This code generates a Smatch warning: net/ethtool/linkinfo.c:143 ethnl_set_linkinfo() warn: variable dereferenced before check 'info' (see line 119) Fortunately, the "info" pointer is never NULL so the check can be removed. Signed-off-by:
Dan Carpenter <dan.carpenter@oracle.com> Reviewed-by:
Michal Kubecek <mkubecek@suse.cz> Signed-off-by:
David S. Miller <davem@davemloft.net>
-
David Howells authored
Under some circumstances, rxrpc will fail to transmit a packet through the underlying UDP socket (i.e. UDP sendmsg returns an error). This may result in a call getting stuck. In the instance being seen, where AFS tries to send a probe to the Volume Location server, tracepoints show the UDP Tx failure (in this case returning error 99 EADDRNOTAVAIL) and then nothing more:

    afs_make_vl_call: c=0000015d VL.GetCapabilities
    rxrpc_call: c=0000015d NWc u=1 sp=rxrpc_kernel_begin_call+0x106/0x170 [rxrpc] a=00000000dd89ee8a
    rxrpc_call: c=0000015d Gus u=2 sp=rxrpc_new_client_call+0x14f/0x580 [rxrpc] a=00000000e20e4b08
    rxrpc_call: c=0000015d SEE u=2 sp=rxrpc_activate_one_channel+0x7b/0x1c0 [rxrpc] a=00000000e20e4b08
    rxrpc_call: c=0000015d CON u=2 sp=rxrpc_kernel_begin_call+0x106/0x170 [rxrpc] a=00000000e20e4b08
    rxrpc_tx_fail: c=0000015d r=1 ret=-99 CallDataNofrag

The problem is that if the initial packet fails and the retransmission timer hasn't been started, the call is set to completed and an error is returned from rxrpc_send_data_packet() to rxrpc_queue_packet(). Though rxrpc_instant_resend() is called, this does nothing because the call is marked completed. So rxrpc_notify_socket() isn't called and the error is passed back up to rxrpc_send_data(), rxrpc_kernel_send_data() and thence to afs_make_call() and afs_vl_get_capabilities() where it is simply ignored because it is assumed that the result of a probe will be collected asynchronously. Fileserver probing is similarly affected via afs_fs_get_capabilities(). Fix this by always issuing a notification in __rxrpc_set_call_completion() if it shifts a call to the completed state, even if an error is also returned to the caller through the function return value. Also put in a little bit of optimisation to avoid taking the call state_lock and disabling softirqs if the call is already in the completed state, and remove some now-redundant rxrpc_notify_socket() calls. Fixes: f5c17aae ("rxrpc: Calls should only have one terminal state") Reported-by:
Gerry Seidman <gerry@auristor.com> Signed-off-by:
David Howells <dhowells@redhat.com> Reviewed-by:
Marc Dionne <marc.dionne@auristor.com>
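An illustrative sketch of the pattern the fix describes (all names here are hypothetical, not the actual rxrpc code): the helper that moves a call to the completed state also notifies the socket, so the error is surfaced even when the caller ignores the return value.

    static bool example_set_call_completion(struct example_call *call, int error)
    {
            bool completed = false;

            write_lock(&call->state_lock);
            if (call->state < EXAMPLE_CALL_COMPLETE) {
                    call->error = error;
                    call->state = EXAMPLE_CALL_COMPLETE;
                    completed = true;
            }
            write_unlock(&call->state_lock);

            if (completed)
                    example_notify_socket(call);    /* hypothetical notifier */
            return completed;
    }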
-
David Howells authored
Move the handling of call completion out of line so that the next patch can add more code in that area. Signed-off-by:
David Howells <dhowells@redhat.com> Reviewed-by:
Marc Dionne <marc.dionne@auristor.com>
-
Johannes Berg authored
Dan points out that if ieee80211_chandef_he_6ghz_oper() succeeds, we don't initialize 'ret'. Initialize it to 0 in this case, since everything went fine and nothing has to be disabled. Reported-by:
Dan Carpenter <dan.carpenter@oracle.com> Fixes: 57fa5e85 ("mac80211: determine chandef from HE 6 GHz operation") Signed-off-by:
Johannes Berg <johannes.berg@intel.com> Link: https://lore.kernel.org/r/20200603111500.bd2a5ff37b83.I2c3f338ce343b581db493eb9a0d988d1b626c8fb@changeid Signed-off-by:
Johannes Berg <johannes.berg@intel.com>
-
Johannes Berg authored
Lockdep reports that we may deadlock because we take the RTNL in the work struct, but flush it under RTNL. Clearly, it's correct. In practice, this can happen when doing rfkill on an active device. Fix this by moving the work struct to the wiphy (registered dev) layer, and iterate over all the wdevs inside there. This then means we need to track which one of them has work to do, so we don't push updates to the driver for all wdevs all the time. Also fix a locking bug I noticed while working on this - the registrations list is iterated as if it were an RCU list, but it isn't handled that way - and we need to lock now for the update flag anyway, so remove the RCU. Fixes: 6cd536fe ("cfg80211: change internal management frame registration API") Reported-by:
Markus Theil <markus.theil@tu-ilmenau.de> Reported-and-tested-by:
Kenneth R. Crudup <kenny@panix.com> Signed-off-by:
Johannes Berg <johannes.berg@intel.com> Link: https://lore.kernel.org/r/20200604120420.b1dc540a7e26.I55dcca56bb5bdc5d7ad66a36a0b42afd7034d8be@changeid Signed-off-by:
Johannes Berg <johannes.berg@intel.com>
-
- Jun 04, 2020
-
-
Pavel Machek authored
64-bit division is kind of expensive, and a shift should do the job here. Signed-off-by:
Pavel Machek (CIP) <pavel@denx.de> Signed-off-by:
David S. Miller <davem@davemloft.net>
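A generic illustration of the substitution (not the exact lines touched by the patch): when the divisor is a known power of two, a shift gives the same result without the cost of a 64-bit division.

    u64 bytes = stats_bytes;                /* hypothetical counter */
    u64 kib   = div_u64(bytes, 1024);       /* 64-bit division */
    u64 kib2  = bytes >> 10;                /* equivalent, just a shift */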
-
Paolo Abeni authored
Clearing the 'inet_num' field is necessary and safe if and only if the socket is not bound. The MPTCP protocol calls the destroy helper on bound sockets, as tcp_v{4,6}_syn_recv_sock completed successfully. Move the clearing of such field out of the common code, otherwise the MPTCP MP_JOIN error path will find the wrong 'inet_num' value on socket disposal, __inet_put_port() will acquire the wrong lock and bind_node removal could race with other modifiers possibly corrupting the bind hash table. Reported-and-tested-by:
Christoph Paasch <cpaasch@apple.com> Fixes: 729cd643 ("mptcp: cope better with MP_JOIN failure") Signed-off-by:
Paolo Abeni <pabeni@redhat.com> Reviewed-by:
Eric Dumazet <edumazet@google.com> Signed-off-by:
David S. Miller <davem@davemloft.net>
-
Ahmed S. Darwish authored
Sequence counters write paths are critical sections that must never be preempted, and blocking, even for CONFIG_PREEMPTION=n, is not allowed. Commit 5dbe7c17 ("net: fix kernel deadlock with interface rename and netdev name retrieval.") handled a deadlock, observed with CONFIG_PREEMPTION=n, where the devnet_rename seqcount read side was infinitely spinning: it got scheduled after the seqcount write side blocked inside its own critical section. To fix that deadlock, among other issues, the commit added a cond_resched() inside the read side section. While this will get the non-preemptible kernel eventually unstuck, the seqcount reader is fully exhausting its slice just spinning -- until TIF_NEED_RESCHED is set. The fix is also still broken: if the seqcount reader belongs to a real-time scheduling policy, it can spin forever and the kernel will livelock. Disabling preemption over the seqcount write side critical section will not work: inside it are a number of GFP_KERNEL allocations and mutex locking through the drivers/base/ :: device_rename() call chain. From all the above, replace the seqcount with a rwsem. Fixes: 5dbe7c17 (net: fix kernel deadlock with interface rename and netdev name retrieval.) Fixes: 30e6c9fa (net: devnet_rename_seq should be a seqcount) Fixes: c91f6df2 (sockopt: Change getsockopt() of SO_BINDTODEVICE to return an interface name) Cc: <stable@vger.kernel.org> Reported-by: kbuild test robot <lkp@intel.com> [ v1 missing up_read() on error exit ] Reported-by: Dan Carpenter <dan.carpenter@oracle.com> [ v1 missing up_read() on error exit ] Signed-off-by:
Ahmed S. Darwish <a.darwish@linutronix.de> Reviewed-by:
Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by:
David S. Miller <davem@davemloft.net>
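A sketch of what the read side looks like after such a conversion (illustrative names, not a verbatim copy of netdev_get_name()): the rwsem simply blocks a reader while a rename is in flight, instead of making it spin in a seqcount retry loop.

    static DECLARE_RWSEM(example_rename_sem);

    static int example_get_name(char *name, struct net_device *dev)
    {
            down_read(&example_rename_sem);
            strscpy(name, dev->name, IFNAMSIZ);
            up_read(&example_rename_sem);
            return 0;
    }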
-
Ahmed Abdelsalam authored
The seg6_validate_srh() is used to validate the SRH for three cases: Case 1: SRH of data-plane SRv6 packets to be processed by the Linux kernel. Case 2: SRH of the netlink message received from user-space (iproute2). Case 3: SRH injected into packets through setsockopt. In Case 1, the SRH can be encoded in the reduced way (i.e., the first SID is carried in the DA only and not represented as a SID in the SRH) and seg6_validate_srh() now handles this case correctly. In Case 2 and Case 3, the SRH shouldn't be encoded in the reduced way, otherwise we lose the first segment (i.e., the first hop). The current implementation of seg6_validate_srh() allows the SRH of Case 2 and Case 3 to be encoded in the reduced way. This leads to a slab-out-of-bounds problem. This patch verifies the SRH for Case 1, Case 2 and Case 3, allowing Case 1 to be reduced while preventing the SRH of Case 2 and Case 3 from being reduced. Reported-by:
<syzbot+e8c028b62439eac42073@syzkaller.appspotmail.com> Reported-by:
YueHaibing <yuehaibing@huawei.com> Fixes: 0cb7498f ("seg6: fix SRH processing to comply with RFC8754") Signed-off-by:
Ahmed Abdelsalam <ahabdels@gmail.com> Signed-off-by:
David S. Miller <davem@davemloft.net>
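A sketch of the split described above (the 'reduced' parameter name follows the commit's wording; the exact upstream signature may differ): data-plane callers pass reduced = true, while netlink and setsockopt callers pass false and therefore reject reduced-encoded headers.

    /* illustrative only */
    bool example_validate_srh(struct ipv6_sr_hdr *srh, int len, bool reduced)
    {
            if (((srh->hdrlen + 1) << 3) != len)
                    return false;

            /* A reduced SRH keeps the first segment in the DA only, which
             * shows up as segments_left exceeding first_segment. */
            if (!reduced && srh->segments_left > srh->first_segment)
                    return false;

            /* ... remaining bounds and TLV checks ... */
            return true;
    }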
-
Tuong Lien authored
syzbot found the following crash:

    general protection fault, probably for non-canonical address 0xdffffc0000000019: 0000 [#1] PREEMPT SMP KASAN
    KASAN: null-ptr-deref in range [0x00000000000000c8-0x00000000000000cf]
    CPU: 1 PID: 7060 Comm: syz-executor394 Not tainted 5.7.0-rc6-syzkaller #0
    Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
    RIP: 0010:__tipc_sendstream+0xbde/0x11f0 net/tipc/socket.c:1591
    Code: 00 00 00 00 48 39 5c 24 28 48 0f 44 d8 e8 fa 3e db f9 48 b8 00 00 00 00 00 fc ff df 48 8d bb c8 00 00 00 48 89 fa 48 c1 ea 03 <80> 3c 02 00 0f 85 e2 04 00 00 48 8b 9b c8 00 00 00 48 b8 00 00 00
    RSP: 0018:ffffc90003ef7818 EFLAGS: 00010202
    RAX: dffffc0000000000 RBX: 0000000000000000 RCX: ffffffff8797fd9d
    RDX: 0000000000000019 RSI: ffffffff8797fde6 RDI: 00000000000000c8
    RBP: ffff888099848040 R08: ffff88809a5f6440 R09: fffffbfff1860b4c
    R10: ffffffff8c305a5f R11: fffffbfff1860b4b R12: ffff88809984857e
    R13: 0000000000000000 R14: ffff888086aa4000 R15: 0000000000000000
    FS:  00000000009b4880(0000) GS:ffff8880ae700000(0000) knlGS:0000000000000000
    CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
    CR2: 0000000020000140 CR3: 00000000a7fdf000 CR4: 00000000001406e0
    DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
    DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
    Call Trace:
     tipc_sendstream+0x4c/0x70 net/tipc/socket.c:1533
     sock_sendmsg_nosec net/socket.c:652 [inline]
     sock_sendmsg+0xcf/0x120 net/socket.c:672
     ____sys_sendmsg+0x32f/0x810 net/socket.c:2352
     ___sys_sendmsg+0x100/0x170 net/socket.c:2406
     __sys_sendmmsg+0x195/0x480 net/socket.c:2496
     __do_sys_sendmmsg net/socket.c:2525 [inline]
     __se_sys_sendmmsg net/socket.c:2522 [inline]
     __x64_sys_sendmmsg+0x99/0x100 net/socket.c:2522
     do_syscall_64+0xf6/0x7d0 arch/x86/entry/common.c:295
     entry_SYSCALL_64_after_hwframe+0x49/0xb3
    RIP: 0033:0x440199
    ...

This bug was bisected to commit 0a3e060f ("tipc: add test for Nagle algorithm effectiveness"). However, that commit is not the real cause; the trouble comes from the base code: in the case of zero-length data message sending, we would unexpectedly end up with an empty 'txq' queue after the 'tipc_msg_append()' in Nagle mode. A similar crash can be generated even without the bisected patch, but at the link layer when it accesses the empty queue. We solve the issues by building at least one buffer to go with the socket's header and an optional data section that may be empty, like what we had with 'tipc_msg_build()'. Note: the previous commit 4c21daae ("tipc: Fix NULL pointer dereference in __tipc_sendstream()") is obsoleted by this one since the 'txq' will never be empty and the check of 'skb != NULL' is unnecessary, but it is safe anyway. Reported-by:
<syzbot+8eac6d030e7807c21d32@syzkaller.appspotmail.com> Fixes: c0bceb97 ("tipc: add smart nagle feature") Acked-by:
Jon Maloy <jmaloy@redhat.com> Signed-off-by:
Tuong Lien <tuong.t.lien@dektech.com.au> Signed-off-by:
David S. Miller <davem@davemloft.net>
-
Cong Wang authored
There are two kinds of memory leaks in genl_family_rcv_msg_dumpit():
1. Before we call ops->start(), whenever an error happens, we forget to free the memory allocated in genl_family_rcv_msg_dumpit().
2. When ops->start() fails, the 'info' has already been installed on the per-socket control block, so we should not free it here. More importantly, nlk->cb_running is still false at this point, so netlink_sock_destruct() cannot free it either.
The first kind of memory leak is easier to resolve, but the second one requires some deeper thought. After reviewing how netfilter handles this, the most elegant solution I found is just to allocate the memory in a similar way, that is, moving memory allocations from the caller into ops->start(). With this, we can solve both kinds of memory leaks: for 1), no memory allocation happens before ops->start(); for 2), ops->start() handles its own failures and 'info' is installed to the socket control block only on success. The only ugliness here is that we have to pass all local variables on the stack via a struct, but this is not hard to understand. Alternatively, we could introduce an ops->free() to solve this too, but it is overkill as only genetlink has this problem so far. Fixes: 1927f41a ("net: genetlink: introduce dump info struct to be available during dumpit op") Reported-by:
<syzbot+21f04f481f449c8db840@syzkaller.appspotmail.com> Cc: "Jason A. Donenfeld" <Jason@zx2c4.com> Cc: Florian Westphal <fw@strlen.de> Cc: Pablo Neira Ayuso <pablo@netfilter.org> Cc: Jiri Pirko <jiri@mellanox.com> Cc: YueHaibing <yuehaibing@huawei.com> Cc: Shaochun Chen <cscnull@gmail.com> Signed-off-by:
Cong Wang <xiyou.wangcong@gmail.com> Signed-off-by:
David S. Miller <davem@davemloft.net>
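A conceptual sketch of the pattern the fix adopts (names are hypothetical, not the actual genetlink code): the dump state is allocated inside ->start(), so a failing start frees it itself and nothing is left half-installed on the socket's control block.

    static int example_dump_start(struct netlink_callback *cb)
    {
            struct example_dump_info *info;

            info = kzalloc(sizeof(*info), GFP_KERNEL);
            if (!info)
                    return -ENOMEM;

            cb->data = info;        /* installed only on success */
            return 0;
    }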
-
- Jun 02, 2020
-
-
Jason Gunthorpe authored
Now that FMR support is gone, this attribute can be deleted from all places. Link: https://lore.kernel.org/r/12-v3-f58e6669d5d3+2cf-fmr_removal_jgg@mellanox.com Reviewed-by:
Max Gurtovoy <maxg@mellanox.com> Reviewed-by:
Bernard Metzler <bmt@zurich.ibm.com> Signed-off-by:
Jason Gunthorpe <jgg@mellanox.com>
-
Max Gurtovoy authored
Use FRWR method for memory registration by default and remove the ancient and unsafe FMR method. Link: https://lore.kernel.org/r/3-v3-f58e6669d5d3+2cf-fmr_removal_jgg@mellanox.com Signed-off-by:
Max Gurtovoy <maxg@mellanox.com> Signed-off-by:
Jason Gunthorpe <jgg@mellanox.com>
-
Tuong Lien authored
This reverts commit 441870ee. Like the previous patch in this series, we revert the above commit that causes similar issues with the 'aead' object. Acked-by:
Jon Maloy <jmaloy@redhat.com> Signed-off-by:
Tuong Lien <tuong.t.lien@dektech.com.au> Signed-off-by:
David S. Miller <davem@davemloft.net>
-
Tuong Lien authored
This reverts commit de058420. There is no actual tipc_node refcnt leak as stated in the above commit. The refcnt is held carefully for the case of an asynchronous decryption (i.e. -EINPROGRESS/-EBUSY and skb = NULL is returned), so that the node object cannot be freed in the meantime. The counter will be re-balanced when the operation's callback arrives with the decrypted buffer, if any. In other cases, e.g. a synchronous crypto operation, the counter will be decreased immediately when it is done. Now with that commit, a kernel panic will occur when there is no node found (i.e. n = NULL) in 'tipc_rcv()' or when the node object is released prematurely. This commit solves the issues by reverting the said commit, but keeping one valid part for the case where 'skb_linearize()' fails. Acked-by:
Jon Maloy <jmaloy@redhat.com> Signed-off-by:
Tuong Lien <tuong.t.lien@dektech.com.au> Tested-by:
Hoang Le <hoang.h.le@dektech.com.au> Signed-off-by:
David S. Miller <davem@davemloft.net>
-
Daniel Borkmann authored
Add a bpf_csum_level() helper which BPF programs can use in combination with bpf_skb_adjust_room() when they pass in the BPF_F_ADJ_ROOM_NO_CSUM_RESET flag to the latter to avoid falling back to CHECKSUM_NONE. bpf_csum_level() allows adjusting the CHECKSUM_UNNECESSARY skb->csum_level via BPF_CSUM_LEVEL_{INC,DEC}, which calls __skb_{incr,decr}_checksum_unnecessary() on the skb. The helper also allows BPF_CSUM_LEVEL_RESET, which sets the skb's csum to CHECKSUM_NONE, as well as BPF_CSUM_LEVEL_QUERY to just return the current level. Without this helper, there is no way to otherwise adjust the skb->csum_level. I did not add extra dummy flags as there is plenty of free bit space in the level argument itself if ever needed in future. Signed-off-by:
Daniel Borkmann <daniel@iogearbox.net> Signed-off-by:
Alexei Starovoitov <ast@kernel.org> Reviewed-by:
Alan Maguire <alan.maguire@oracle.com> Acked-by:
Lorenz Bauer <lmb@cloudflare.com> Link: https://lore.kernel.org/bpf/279ae3717cb3d03c0ffeb511493c93c450a01e1a.1591108731.git.daniel@iogearbox.net
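A sketch of how a tc BPF program might combine the two helpers when stripping an encapsulation header (the program and the ENCAP_LEN value are illustrative; the helper names and flags are the ones introduced by this series):

    #include <linux/bpf.h>
    #include <linux/pkt_cls.h>
    #include <bpf/bpf_helpers.h>

    #define ENCAP_LEN 36    /* hypothetical outer IP + UDP + GUE overhead */

    SEC("classifier")
    int decap(struct __sk_buff *skb)
    {
            /* Shrink at the MAC layer but keep the csum state instead of
             * resetting it to CHECKSUM_NONE. */
            if (bpf_skb_adjust_room(skb, -ENCAP_LEN, BPF_ADJ_ROOM_MAC,
                                    BPF_F_ADJ_ROOM_FIXED_GSO |
                                    BPF_F_ADJ_ROOM_NO_CSUM_RESET))
                    return TC_ACT_SHOT;

            /* One checksum-verified layer was removed, so drop the level. */
            bpf_csum_level(skb, BPF_CSUM_LEVEL_DEC);

            return bpf_redirect(skb->ifindex, BPF_F_INGRESS);
    }

    char _license[] SEC("license") = "GPL";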
-
Daniel Borkmann authored
Lorenz recently reported: In our TC classifier cls_redirect [0], we use the following sequence of helper calls to decapsulate a GUE (basically IP + UDP + custom header) encapsulated packet:

    bpf_skb_adjust_room(skb, -encap_len, BPF_ADJ_ROOM_MAC, BPF_F_ADJ_ROOM_FIXED_GSO)
    bpf_redirect(skb->ifindex, BPF_F_INGRESS)

It seems like some checksums of the inner headers are not validated in this case. For example, a TCP SYN packet with invalid TCP checksum is still accepted by the network stack and elicits a SYN ACK. [...] That is, we receive the following packet from the driver:

    | ETH | IP | UDP | GUE | IP | TCP |
    skb->ip_summed == CHECKSUM_UNNECESSARY

ip_summed is CHECKSUM_UNNECESSARY because our NICs do rx checksum offloading. On this packet we run skb_adjust_room_mac(-encap_len), and get the following:

    | ETH | IP | TCP |
    skb->ip_summed == CHECKSUM_UNNECESSARY

Note that ip_summed is still CHECKSUM_UNNECESSARY. After bpf_redirect()'ing into the ingress, we end up in tcp_v4_rcv(). There, skb_checksum_init() is turned into a no-op due to CHECKSUM_UNNECESSARY. The bpf_skb_adjust_room() helper is not aware of protocol specifics. Internally, it handles the CHECKSUM_COMPLETE case via skb_postpull_rcsum(), but that does not cover CHECKSUM_UNNECESSARY. In this case skb->csum_level of the original skb prior to the bpf_skb_adjust_room() call was 0, that is, covering UDP. Right now there is no way to adjust the skb->csum_level. NICs that have checksum offload disabled (CHECKSUM_NONE) or that support CHECKSUM_COMPLETE are not affected. Use a safe default for CHECKSUM_UNNECESSARY by resetting to CHECKSUM_NONE and add a flag to the helper called BPF_F_ADJ_ROOM_NO_CSUM_RESET that allows users to opt out. Opting out is useful for the case where we don't remove/add full protocol headers, or for the case where a user wants to adjust the csum level manually, e.g. through the bpf_csum_level() helper that is added in the subsequent patch. The bpf_skb_proto_{4_to_6,6_to_4}() for NAT64/46 translation from the BPF bpf_skb_change_proto() helper uses the bpf_skb_net_hdr_{push,pop}() pair internally as well but doesn't change layers, only transitions between v4 and v6, therefore no adaptation is required there. [0] https://lore.kernel.org/bpf/20200424185556.7358-1-lmb@cloudflare.com/ Fixes: 2be7e212 ("bpf: add bpf_skb_adjust_room helper") Reported-by:
Lorenz Bauer <lmb@cloudflare.com> Reported-by:
Alan Maguire <alan.maguire@oracle.com> Signed-off-by:
Daniel Borkmann <daniel@iogearbox.net> Signed-off-by:
Lorenz Bauer <lmb@cloudflare.com> Signed-off-by:
Alexei Starovoitov <ast@kernel.org> Reviewed-by:
Alan Maguire <alan.maguire@oracle.com> Link: https://lore.kernel.org/bpf/CACAyw9-uU_52esMd1JjuA80fRPHJv5vsSg8GnfW3t_qDU4aVKQ@mail.gmail.com/ Link: https://lore.kernel.org/bpf/11a90472e7cce83e76ddbfce81fdfce7bfc68808.1591108731.git.daniel@iogearbox.net
-
Christoph Hellwig authored
The pgprot argument to __vmalloc is always PAGE_KERNEL now, so remove it. Signed-off-by:
Christoph Hellwig <hch@lst.de> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Reviewed-by: Michael Kelley <mikelley@microsoft.com> [hyperv] Acked-by: Gao Xiang <xiang@kernel.org> [erofs] Acked-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by:
Wei Liu <wei.liu@kernel.org> Cc: Christian Borntraeger <borntraeger@de.ibm.com> Cc: Christophe Leroy <christophe.leroy@c-s.fr> Cc: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: David Airlie <airlied@linux.ie> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Haiyang Zhang <haiyangz@microsoft.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: "K. Y. Srinivasan" <kys@microsoft.com> Cc: Laura Abbott <labbott@redhat.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Minchan Kim <minchan@kernel.org> Cc: Nitin Gupta <ngupta@vflare.org> Cc: Robin Murphy <robin.murphy@arm.com> Cc: Sakari Ailus <sakari.ailus@linux.intel.com> Cc: Stephen Hemminger <sthemmin@microsoft.com> Cc: Sumit Semwal <sumit.semwal@linaro.org> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Paul Mackerras <paulus@ozlabs.org> Cc: Vasily Gorbik <gor@linux.ibm.com> Cc: Will Deacon <will@kernel.org> Link: http://lkml.kernel.org/r/20200414131348.444715-22-hch@lst.de Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
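A before/after sketch of the API change at a call site (the call site itself is illustrative, not one of the files touched here):

    /* old: pgprot passed explicitly, always PAGE_KERNEL */
    buf = __vmalloc(size, GFP_KERNEL | __GFP_ZERO, PAGE_KERNEL);

    /* new: pgprot argument removed */
    buf = __vmalloc(size, GFP_KERNEL | __GFP_ZERO);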
-
Christoph Hellwig authored
Switch all callers to map_kernel_range(), which is symmetric to the unmap side (as well as the _noflush versions). Signed-off-by:
Christoph Hellwig <hch@lst.de> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Acked-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Christian Borntraeger <borntraeger@de.ibm.com> Cc: Christophe Leroy <christophe.leroy@c-s.fr> Cc: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: David Airlie <airlied@linux.ie> Cc: Gao Xiang <xiang@kernel.org> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Haiyang Zhang <haiyangz@microsoft.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: "K. Y. Srinivasan" <kys@microsoft.com> Cc: Laura Abbott <labbott@redhat.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Michael Kelley <mikelley@microsoft.com> Cc: Minchan Kim <minchan@kernel.org> Cc: Nitin Gupta <ngupta@vflare.org> Cc: Robin Murphy <robin.murphy@arm.com> Cc: Sakari Ailus <sakari.ailus@linux.intel.com> Cc: Stephen Hemminger <sthemmin@microsoft.com> Cc: Sumit Semwal <sumit.semwal@linaro.org> Cc: Wei Liu <wei.liu@kernel.org> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Paul Mackerras <paulus@ozlabs.org> Cc: Vasily Gorbik <gor@linux.ibm.com> Cc: Will Deacon <will@kernel.org> Link: http://lkml.kernel.org/r/20200414131348.444715-17-hch@lst.de Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
- Jun 01, 2020
-
-
Vinay Kumar Yadav authored
Extends support to IPv6 for Inline TLS server. Signed-off-by:
Vinay Kumar Yadav <vinay.yadav@chelsio.com> v1->v2: - cc'd tcp folks. v2->v3: - changed EXPORT_SYMBOL() to EXPORT_SYMBOL_GPL() Signed-off-by:
David S. Miller <davem@davemloft.net>
-
Hangbin Liu authored
Socket option IPV6_ADDRFORM supports UDP/UDPLITE and TCP at present. Previously the checking logic looked like:

    if (sk->sk_protocol == IPPROTO_UDP || sk->sk_protocol == IPPROTO_UDPLITE)
            do_some_check;
    else if (sk->sk_protocol != IPPROTO_TCP)
            break;

After commit b6f61189 ("ipv6: restrict IPV6_ADDRFORM operation"), TCP was blocked as the logic changed to:

    if (sk->sk_protocol == IPPROTO_UDP || sk->sk_protocol == IPPROTO_UDPLITE)
            do_some_check;
    else if (sk->sk_protocol == IPPROTO_TCP)
            do_some_check;
            break;
    else
            break;

Then after commit 82c9ae44 ("ipv6: fix restrict IPV6_ADDRFORM operation") UDP/UDPLITE were blocked as the logic changed to:

    if (sk->sk_protocol == IPPROTO_UDP || sk->sk_protocol == IPPROTO_UDPLITE)
            do_some_check;
    if (sk->sk_protocol == IPPROTO_TCP)
            do_some_check;
    if (sk->sk_protocol != IPPROTO_TCP)
            break;

Fix it by using Eric's code and simply removing the break in the TCP check, which looks like:

    if (sk->sk_protocol == IPPROTO_UDP || sk->sk_protocol == IPPROTO_UDPLITE)
            do_some_check;
    else if (sk->sk_protocol == IPPROTO_TCP)
            do_some_check;
    else
            break;

Fixes: 82c9ae44 ("ipv6: fix restrict IPV6_ADDRFORM operation") Signed-off-by:
Hangbin Liu <liuhangbin@gmail.com> Signed-off-by:
David S. Miller <davem@davemloft.net>
-
YueHaibing authored
tipc_sendstream() may send a zero-length packet; in that case tipc_msg_append() does not allocate an skb, skb_peek_tail() returns NULL, and msg_set_ack_required() triggers a NULL pointer dereference. Reported-by:
<syzbot+8eac6d030e7807c21d32@syzkaller.appspotmail.com> Fixes: 0a3e060f ("tipc: add test for Nagle algorithm effectiveness") Signed-off-by:
YueHaibing <yuehaibing@huawei.com> Signed-off-by:
David S. Miller <davem@davemloft.net>
-
Jakub Sitnicki authored
Move functions to manage BPF programs attached to netns that are not specific to flow dissector to a dedicated module named bpf/net_namespace.c. The set of functions will grow with the addition of bpf_link support for netns attached programs. This patch prepares ground by creating a place for it. This is a code move with no functional changes intended. Signed-off-by:
Jakub Sitnicki <jakub@cloudflare.com> Signed-off-by:
Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200531082846.2117903-4-jakub@cloudflare.com
-
Jakub Sitnicki authored
In order to: (1) attach more than one BPF program type to netns, or (2) support attaching BPF programs to netns with bpf_link, or (3) support multi-prog attach points for netns we will need to keep more state per netns than a single pointer like we have now for BPF flow dissector program. Prepare for the above by extracting netns_bpf that is part of struct net, for storing all state related to BPF programs attached to netns. Turn flow dissector callbacks for querying/attaching/detaching a program into generic ones that operate on netns_bpf. Next patch will move the generic callbacks into their own module. This is similar to how it is organized for cgroup with cgroup_bpf. Signed-off-by:
Jakub Sitnicki <jakub@cloudflare.com> Signed-off-by:
Alexei Starovoitov <ast@kernel.org> Cc: Stanislav Fomichev <sdf@google.com> Link: https://lore.kernel.org/bpf/20200531082846.2117903-3-jakub@cloudflare.com
-
Jakub Sitnicki authored
Split out the part of attach callback that happens with attach/detach lock acquired. This structures the prog attach callback in a way that opens up doors for moving the locking out of flow_dissector and into generic callbacks for attaching/detaching progs to netns in subsequent patches. Signed-off-by:
Jakub Sitnicki <jakub@cloudflare.com> Signed-off-by:
Alexei Starovoitov <ast@kernel.org> Reviewed-by:
Stanislav Fomichev <sdf@google.com> Link: https://lore.kernel.org/bpf/20200531082846.2117903-2-jakub@cloudflare.com
-
Ferenc Fejes authored
Extending the supported sockopts in bpf_setsockopt() with SO_BINDTODEVICE. We call sock_bindtoindex() with parameter lock_sk = false in this context because we already own the socket. Signed-off-by:
Ferenc Fejes <fejes@inf.elte.hu> Signed-off-by:
Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/4149e304867b8d5a606a305bc59e29b063e51f49.1590871065.git.fejes@inf.elte.hu
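A sketch of using the new option from a BPF program (the program type, hook and interface name are illustrative; bpf_setsockopt() takes the device name and resolves it to an index internally):

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    #define SOL_SOCKET      1
    #define SO_BINDTODEVICE 25      /* value on most architectures */

    SEC("sockops")
    int bind_to_dev(struct bpf_sock_ops *skops)
    {
            char ifname[] = "eth0";         /* hypothetical device name */

            if (skops->op == BPF_SOCK_OPS_TCP_CONNECT_CB)
                    bpf_setsockopt(skops, SOL_SOCKET, SO_BINDTODEVICE,
                                   ifname, sizeof(ifname));
            return 1;
    }

    char _license[] SEC("license") = "GPL";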
-
Ferenc Fejes authored
sock_bindtoindex() is intended for kernel-wide usage; however, it locks the socket regardless of the context. This modification makes the locking optional: the socket is locked only when sock_bindtoindex() is called with lock_sk = true. The modification is applied to all users of sock_bindtoindex(). Signed-off-by:
Ferenc Fejes <fejes@inf.elte.hu> Signed-off-by:
Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/bee6355da40d9e991b2f2d12b67d55ebb5f5b207.1590871065.git.fejes@inf.elte.hu
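The resulting kernel-side shape, as described above (shown per the commit description): callers that already hold the socket lock pass lock_sk = false; everyone else passes true and the helper locks the socket itself.

    int sock_bindtoindex(struct sock *sk, int ifindex, bool lock_sk);

    /* e.g. from a path that already owns the socket: */
    err = sock_bindtoindex(sk, dev->ifindex, false);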
-
John Fastabend authored
KTLS uses a stream parser to collect TLS messages and send them to the upper layer tls receive handler. This ensures the tls receiver has a full TLS header to parse when it is run. However, when a socket has a BPF_SK_SKB_STREAM_VERDICT program attached before KTLS is enabled, we end up with two stream parsers running on the same socket, and both try to consume its data. First the KTLS stream parser runs and calls read_sock(), which will call tcp_read_sock(), which in turn calls tcp_rcv_skb(). This dequeues the skb from the sk_receive_queue. When this is done, the KTLS code then calls the data_ready() callback which, because we stacked KTLS on top of the bpf stream verdict program, has been replaced with sk_psock_start_strp(). This will in turn kick the stream parser again and eventually do the same thing KTLS did above, calling into tcp_rcv_skb() and dequeuing a skb from the sk_receive_queue. At this point the data stream is broken. Part of the stream was handled by the KTLS side; some other bytes may have been handled by the BPF side. Generally this results in either missing data or, more likely, a "Bad Message" complaint from the kTLS receive handler as the BPF program steals some bytes meant to be in a TLS header and/or the TLS header length is no longer correct. We've already broken the idealized model where we can stack ULPs in any order with generic callbacks on the TX side to handle this. So in this patch we do the same thing but for the RX side. We add a sk_psock_strp_enabled() helper so TLS can learn a BPF verdict program is running, and add a tls_sw_has_ctx_rx() helper so the BPF side can learn there is a TLS ULP on the socket. Then on the BPF side we omit calling our stream parser to avoid breaking the data stream for the KTLS receiver. Then on the KTLS side we call BPF_SK_SKB_STREAM_VERDICT once the KTLS receiver is done with the packet but before it posts the msg to userspace. This gives us symmetry between the TX and RX halves and IMO makes it usable again. On the TX side we process packets in this order BPF -> TLS -> TCP, and on the receive side in the reverse order TCP -> TLS -> BPF. Discovered while testing OpenSSL 3.0 Alpha2.0 release. Fixes: d829e9c4 ("tls: convert to generic sk_msg interface") Signed-off-by:
John Fastabend <john.fastabend@gmail.com> Signed-off-by:
Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/159079361946.5745.605854335665044485.stgit@john-Precision-5820-Tower Signed-off-by:
Alexei Starovoitov <ast@kernel.org>
-
John Fastabend authored
We will need this block of code called from the tls context shortly, so let's refactor the redirect logic so it's easy to use. This also cleans up the switch stmt so we have fewer fallthrough cases. No logic changes are intended. Fixes: d829e9c4 ("tls: convert to generic sk_msg interface") Signed-off-by:
John Fastabend <john.fastabend@gmail.com> Signed-off-by:
Alexei Starovoitov <ast@kernel.org> Reviewed-by:
Jakub Sitnicki <jakub@cloudflare.com> Acked-by:
Song Liu <songliubraving@fb.com> Link: https://lore.kernel.org/bpf/159079360110.5745.7024009076049029819.stgit@john-Precision-5820-Tower Signed-off-by:
Alexei Starovoitov <ast@kernel.org>
-
David Ahern authored
Add xdp_txq_info as the Tx counterpart to xdp_rxq_info. At the moment only the device is added. Other fields (queue_index) can be added as use cases arise. From a UAPI perspective, add egress_ifindex to xdp context for bpf programs to see the Tx device. Update the verifier to only allow accesses to egress_ifindex by XDP programs with BPF_XDP_DEVMAP expected attach type. Signed-off-by:
David Ahern <dsahern@kernel.org> Signed-off-by:
Alexei Starovoitov <ast@kernel.org> Acked-by:
Toke Høiland-Jørgensen <toke@redhat.com> Link: https://lore.kernel.org/bpf/20200529220716.75383-4-dsahern@kernel.org Signed-off-by:
Alexei Starovoitov <ast@kernel.org>
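A sketch of a devmap-attached XDP program reading the new field (the program body is illustrative; the section name follows contemporary libbpf conventions for expected attach type BPF_XDP_DEVMAP):

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    SEC("xdp_devmap")
    int xdp_devmap_prog(struct xdp_md *ctx)
    {
            /* ingress_ifindex: device the frame arrived on
             * egress_ifindex:  device this devmap entry redirects to */
            if (ctx->egress_ifindex == ctx->ingress_ifindex)
                    return XDP_DROP;        /* e.g. avoid hairpin forwarding */
            return XDP_PASS;
    }

    char _license[] SEC("license") = "GPL";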
-
David Ahern authored
Add BPF_XDP_DEVMAP attach type for use with programs associated with a DEVMAP entry. Allow DEVMAPs to associate a program with a device entry by adding a bpf_prog.fd to 'struct bpf_devmap_val'. Values read show the program id, so the fd and id are a union. bpf programs can get access to the struct via vmlinux.h. The program associated with the fd must have type XDP with expected attach type BPF_XDP_DEVMAP. When a program is associated with a device index, the program is run on an XDP_REDIRECT and before the buffer is added to the per-cpu queue. At this point rxq data is still valid; the next patch adds tx device information allowing the program to see both ingress and egress device indices. XDP generic is skb based and XDP programs do not work with skb's. Block that use case by walking the maps used by a program that is to be attached via xdpgeneric and failing if any of them are DEVMAP / DEVMAP_HASH with a program attached. Also block attaching BPF_XDP_DEVMAP programs directly to devices. Signed-off-by:
David Ahern <dsahern@kernel.org> Signed-off-by:
Alexei Starovoitov <ast@kernel.org> Acked-by:
Toke Høiland-Jørgensen <toke@redhat.com> Link: https://lore.kernel.org/bpf/20200529220716.75383-3-dsahern@kernel.org Signed-off-by:
Alexei Starovoitov <ast@kernel.org>
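The value layout described above, plus a sketch of how user space might attach a program fd to a devmap entry (variable names are hypothetical; the struct is exposed via vmlinux.h / the UAPI header):

    struct bpf_devmap_val {
            __u32 ifindex;          /* device the entry redirects to */
            union {
                    int   fd;       /* prog fd, set by user space */
                    __u32 id;       /* prog id, reported on lookup */
            } bpf_prog;
    };

    __u32 key = 0;
    struct bpf_devmap_val val = {
            .ifindex     = target_ifindex,          /* hypothetical */
            .bpf_prog.fd = devmap_prog_fd,          /* hypothetical */
    };
    bpf_map_update_elem(devmap_fd, &key, &val, 0);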
-
Amritha Nambiar authored
Add "rx_queue_mapping" to bpf_sock. This gives read access for the existing field (sk_rx_queue_mapping) of struct sock from bpf_sock. Semantics for the bpf_sock rx_queue_mapping access are similar to sk_rx_queue_get(), i.e the value NO_QUEUE_MAPPING is not allowed and -1 is returned in that case. This is useful for transmit queue selection based on the received queue index which is cached in the socket in the receive path. v3: Addressed review comments to add usecase in patch description, and fixed default value for rx_queue_mapping. v2: fixed build error for CONFIG_XPS wrapping, reported by kbuild test robot <lkp@intel.com> Signed-off-by:
Amritha Nambiar <amritha.nambiar@intel.com> Signed-off-by:
Alexei Starovoitov <ast@kernel.org>
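A sketch of the intended use from the description above (illustrative only; verifier details such as the bpf_sk_fullsock() step may vary): pick the egress queue from the RX queue cached on the socket.

    #include <linux/bpf.h>
    #include <linux/pkt_cls.h>
    #include <bpf/bpf_helpers.h>

    SEC("classifier")
    int select_txq(struct __sk_buff *skb)
    {
            struct bpf_sock *sk = skb->sk;

            if (!sk)
                    return TC_ACT_OK;
            sk = bpf_sk_fullsock(sk);
            /* rx_queue_mapping reads as -1 when no RX queue is cached */
            if (sk && sk->rx_queue_mapping >= 0)
                    skb->queue_mapping = sk->rx_queue_mapping;
            return TC_ACT_OK;
    }

    char _license[] SEC("license") = "GPL";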
-
John Fastabend authored
Add helpers to use local socket storage. Signed-off-by:
John Fastabend <john.fastabend@gmail.com> Signed-off-by:
Daniel Borkmann <daniel@iogearbox.net> Acked-by:
Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/bpf/159033907577.12355.14740125020572756560.stgit@john-Precision-5820-Tower Signed-off-by:
Alexei Starovoitov <ast@kernel.org>
-
John Fastabend authored
Add these generic helpers that may be useful to use from sk_msg programs. The helpers do not depend on ctx so we can simply add them here:

    BPF_FUNC_perf_event_output
    BPF_FUNC_get_current_uid_gid
    BPF_FUNC_get_current_pid_tgid
    BPF_FUNC_get_current_cgroup_id
    BPF_FUNC_get_current_ancestor_cgroup_id
    BPF_FUNC_get_cgroup_classid

Signed-off-by:
John Fastabend <john.fastabend@gmail.com> Signed-off-by:
Daniel Borkmann <daniel@iogearbox.net> Acked-by:
Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/bpf/159033903373.12355.15489763099696629346.stgit@john-Precision-5820-Tower Signed-off-by:
Alexei Starovoitov <ast@kernel.org>
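A sketch of an sk_msg program using a couple of the newly allowed helpers (the map and event layout are hypothetical):

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    struct {
            __uint(type, BPF_MAP_TYPE_PERF_EVENT_ARRAY);
            __uint(key_size, sizeof(int));
            __uint(value_size, sizeof(int));
    } events SEC(".maps");

    SEC("sk_msg")
    int msg_audit(struct sk_msg_md *msg)
    {
            struct {
                    __u64 pid_tgid;
                    __u64 cgroup_id;
            } ev = {
                    .pid_tgid  = bpf_get_current_pid_tgid(),
                    .cgroup_id = bpf_get_current_cgroup_id(),
            };

            /* emit one event per message to user space */
            bpf_perf_event_output(msg, &events, BPF_F_CURRENT_CPU,
                                  &ev, sizeof(ev));
            return SK_PASS;
    }

    char _license[] SEC("license") = "GPL";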
-
Al Viro authored
no point getting compat_cmsghdr field-by-field Signed-off-by:
Al Viro <viro@zeniv.linux.org.uk> Signed-off-by:
David S. Miller <davem@davemloft.net>
-
Guillaume Nault authored
Compiling with W=1 gives the following warning:

    net/sched/cls_flower.c:731:1: warning: ‘mpls_opts_policy’ defined but not used [-Wunused-const-variable=]

The TCA_FLOWER_KEY_MPLS_OPTS contains a list of TCA_FLOWER_KEY_MPLS_OPTS_LSE. Therefore, the attributes all have the same type and we can't parse the list with nla_parse*() and have the attributes validated automatically using an nla_policy. fl_set_key_mpls_opts() properly verifies that all attributes in the list are TCA_FLOWER_KEY_MPLS_OPTS_LSE. Then fl_set_key_mpls_lse() uses nla_parse_nested() on all these attributes, thus verifying that they have the NLA_F_NESTED flag. So we can safely drop the mpls_opts_policy. Reported-by:
kbuild test robot <lkp@intel.com> Signed-off-by:
Guillaume Nault <gnault@redhat.com> Signed-off-by:
David S. Miller <davem@davemloft.net>
-