  1. Apr 08, 2014
    • net: sctp: wake up all assocs if sndbuf policy is per socket · 52c35bef
      Daniel Borkmann authored
      
      
      SCTP charges chunks for wmem accounting via skb->truesize in
      sctp_set_owner_w(), and sctp_wfree() respectively as the
      reverse operation. If a sender runs out of wmem, it needs to
      wait via sctp_wait_for_sndbuf(), and gets woken up by a call
      to __sctp_write_space() mostly via sctp_wfree().
      
      __sctp_write_space() is called per association. Although we assign
      sk->sk_write_space() to sctp_write_space(), which would then act per
      socket, it is only used when send space is increased via the
      SO_SNDBUF socket option, as SOCK_USE_WRITE_QUEUE is set and it is
      therefore not invoked from sock_wfree().
      
      Commit 4c3a5bda ("sctp: Don't charge for data in sndbuf
      again when transmitting packet") fixed an issue where, in case
      sctp_packet_transmit() manages to queue up more than sndbuf bytes,
      sctp_wait_for_sndbuf() would never be woken up again unless
      interrupted by a signal. A still remaining issue, however, is that
      with net.sctp.sndbuf_policy=0, that is, accounting per socket, and
      one-to-many sockets in use, the write space reclaimed by
      sctp_wfree() is 'unfairly' handed back on the server to whichever
      association happens to be woken up again via __sctp_write_space(),
      while the remaining associations are never woken up again (unless
      by a signal).
      
      The effect disappears with net.sctp.sndbuf_policy=1, that
      is wmem accounting per association, as it guarantees a fair
      share of wmem among associations.
      
      Therefore, if we have reclaimed memory in the per-socket accounting
      case, wake up all associations related to the socket in a fair
      manner: traverse the socket's association list starting from the
      current association's neighbour and issue a __sctp_write_space()
      to everyone until we end up waking ourselves. This guarantees that
      no association is preferred over another, and that even if more
      associations join the one-to-many session, all receivers will get
      messages from the server and are not stalled forever on high load.
      This still leaves the advantage of per-socket accounting intact, as
      an association can still use up global limits if they are unused by
      others.
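
      The patch implements this with a small helper, sketched below
      (abridged from the change itself; the per-association policy keeps
      the old single wakeup):

          static void sctp_wake_up_waiters(struct sock *sk,
                                           struct sctp_association *asoc)
          {
                  struct sctp_association *tmp = asoc;

                  /* Per-association accounting: only our own association
                   * needs to be woken. */
                  if (asoc->ep->sndbuf_policy)
                          return __sctp_write_space(asoc);

                  /* Per-socket accounting: wake everybody else first and
                   * ourselves last. We are called under the lock, so the
                   * association list cannot change under us. */
                  for (tmp = list_next_entry(tmp, asocs); 1;
                       tmp = list_next_entry(tmp, asocs)) {
                          /* Manually skip the list head. */
                          if (&tmp->asocs == &((sctp_sk(sk))->ep->asocs))
                                  continue;
                          __sctp_write_space(tmp);
                          if (tmp == asoc)
                                  break;
                  }
          }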
      
      Fixes: 4eb701df ("[SCTP] Fix SCTP sendbuffer accouting.")
      Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
      Cc: Thomas Graf <tgraf@suug.ch>
      Cc: Neil Horman <nhorman@tuxdriver.com>
      Cc: Vlad Yasevich <vyasevic@redhat.com>
      Acked-by: Vlad Yasevich <vyasevic@redhat.com>
      Acked-by: Neil Horman <nhorman@tuxdriver.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  2. Apr 07, 2014
    • netdev: remove potentially harmful checks · 6859e7df
      Veaceslav Falico authored
      
      
      Currently we're checking a variable for != NULL after actually
      dereferencing it, in netdev_lower_get_next_private*().
      
      It's counter-intuitive at best, and can lead to faulty usage (as it implies
      that the variable can be NULL), so fix it by removing the useless checks.
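
      For illustration, a sketch of the corrected iterator (abridged):
      list_entry() is plain pointer arithmetic and can never return NULL,
      so the loop has to terminate on the list head instead:

          void *netdev_lower_get_next_private(struct net_device *dev,
                                              struct list_head **iter)
          {
                  struct netdev_adjacent *lower;

                  /* Cannot be NULL - checking it afterwards is useless. */
                  lower = list_entry(*iter, struct netdev_adjacent, list);

                  if (&lower->list == &dev->adj_list.lower)
                          return NULL;

                  *iter = lower->list.next;

                  return lower->private;
          }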
      
      Reported-by: Daniel Borkmann <dborkman@redhat.com>
      CC: "David S. Miller" <davem@davemloft.net>
      CC: Eric Dumazet <edumazet@google.com>
      CC: Nicolas Dichtel <nicolas.dichtel@6wind.com>
      CC: Jiri Pirko <jiri@resnulli.us>
      CC: stephen hemminger <stephen@networkplumber.org>
      CC: Jerry Chu <hkchu@google.com>
      Signed-off-by: Veaceslav Falico <vfalico@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • pktgen: fix xmit test for BQL enabled devices · 6f25cd47
      Daniel Borkmann authored
      
      
      Similarly as in commit 8e2f1a63 ("packet: fix packet_direct_xmit
      for BQL enabled drivers"), pktgen's xmit currently tests for the
      __QUEUE_STATE_STACK_XOFF bit and would therefore not fully fill the
      device's TX ring on BQL drivers that use netdev_tx_sent_queue().
      The fix is to use the netif_xmit_frozen_or_drv_stopped() test, just
      as we do in packet sockets.
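
      A sketch of the resulting test in pktgen's xmit path (illustrative,
      following the commit description):

          __netif_tx_lock_bh(txq);

          /* BQL drivers set __QUEUE_STATE_STACK_XOFF via
           * netdev_tx_sent_queue(); that bit must not keep pktgen from
           * filling the TX ring. Only driver-stopped/frozen matter. */
          if (unlikely(netif_xmit_frozen_or_drv_stopped(txq))) {
                  ret = NETDEV_TX_BUSY;
                  pkt_dev->last_ok = 0;
                  goto unlock;
          }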
      
      Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
      Cc: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • tipc: Let tipc_release() return 0 · 065d7e39
      Geert Uytterhoeven authored
      
      
      net/tipc/socket.c: In function ‘tipc_release’:
      net/tipc/socket.c:352: warning: ‘res’ is used uninitialized in this function
      
      Introduced by commit 24be34b5 ("tipc:
      eliminate upcall function pointers between port and socket"), which
      removed the sole initializer of "res".
      
      Just return 0 to fix it.
      
      Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • mac802154: fix duplicate #include headers · 6c6a9855
      Jean Sacren authored
      
      
      The commit e6278d92 ("mac802154: use header operations to
      create/parse headers") included the header
      
      		net/ieee802154_netdev.h
      
      which had already been included by commit b70ab2e8 ("ieee802154:
      enforce consistent endianness in the 802.15.4 stack"). Fix the
      duplicate #include by deleting the redundant directive, as the
      required header is already in place.
      
      Signed-off-by: Jean Sacren <sakiwit@gmail.com>
      Cc: Alexander Smirnov <alex.bluesman.smirnov@gmail.com>
      Cc: Dmitry Eremin-Solenikov <dbaryshkov@gmail.com>
      Cc: Phoebe Buckheister <phoebe.buckheister@itwm.fraunhofer.de>
      Cc: linux-zigbee-devel@lists.sourceforge.net
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: filter: be more defensive on div/mod by X==0 · 5f9fde5f
      Daniel Borkmann authored
      
      
      The old interpreter behaviour was that we returned with 0
      whenever we found a division by 0 would take place. In the new
      interpreter we would currently just skip that instead and
      continue execution.
      
      It's true that a value of 0 as return might not be appropriate
      in all cases, but current users (socket filters -> drop
      packet, seccomp -> SECCOMP_RET_KILL, cls_bpf -> unclassified,
      etc) seem fine with that behaviour. Better this than undefined
      BPF program behaviour as it's expected that A contains the
      result of the division. In future, as more use cases open up,
      we could further adapt this return value to our needs, if
      necessary.
      
      So reintroduce the return of 0 for division by 0, as in the old
      interpreter. Also, in the case of K, which is guaranteed to be
      32 bits wide, sk_chk_filter() already takes care of rejecting
      division by 0 through K, so we can generally spare ourselves these
      tests there.
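
      Illustrative sketch of the reintroduced checks in the new
      interpreter's div/mod-by-register cases (labels abridged):

          ALU64_DIV_X:
                  if (unlikely(X == 0))
                          return 0;       /* old interpreter behaviour */
                  A /= X;
                  CONT;
          ALU_MOD_X:
                  if (unlikely(X == 0))
                          return 0;
                  tmp = (u32) A;
                  A = do_div(tmp, (u32) X);
                  CONT;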
      
      Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
      Reviewed-by: Alexei Starovoitov <ast@plumgrid.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  3. Apr 05, 2014
    • netfilter: Can't fail and free after table replacement · c58dd2dd
      Thomas Graf authored
      
      
      All xtables variants suffer from the defect that the copy_to_user()
      which copies the counters to user memory may fail after the table
      has already been exchanged and thus exposed. Returning an error at
      this point would result in freeing the already exposed table. Any
      subsequent packet processing would then run into a kernel panic.

      We can't copy the counters before exposing the new table, as we
      want to provide the counter state from after the old table has been
      unhooked. Therefore, convert this into a silent error.
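
      The resulting pattern, sketched (abridged; based on the commit's
      description of the counter copy after the table swap):

          if (copy_to_user(counters_ptr, counters,
                           sizeof(struct xt_counters) * num_counters) != 0) {
                  /* Silent error, can't fail, new table is already
                   * in place */
                  net_warn_ratelimited("iptables: counters copy to user failed while replacing table\n");
          }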
      
      Cc: Florian Westphal <fw@strlen.de>
      Signed-off-by: Thomas Graf <tgraf@suug.ch>
      Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
  6. Mar 31, 2014
    • ipv6: some ipv6 statistic counters failed to disable bh · 43a43b60
      Hannes Frederic Sowa authored
      
      
      After commit c15b1cca ("ipv6: move DAD and addrconf_verify
      processing to workqueue") some counters are now updated in process
      context and thus need to disable bh before doing so, otherwise
      deadlocks can happen on 32-bit archs. Fabio Estevam noticed this
      while mounting an NFS volume on an ARM board.

      To compensate for missing this, I audited the other *_STATS_BH
      call sites and found three more that need updating (a sketch of
      the fix pattern follows the list):
      
      1) icmp6_send: ip6_fragment -> icmpv6_send -> icmp6_send (error handling)
      2) ip6_push_pending_frames: rawv6_sendmsg -> rawv6_push_pending_frames -> ...
         (only in case of icmp protocol with raw sockets in error handling)
      3) ping6_v6_sendmsg (error handling)
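
      Illustrative sketch of the fix pattern (assuming the icmp6_send()
      error path; the *_STATS_BH variants use __this_cpu ops that require
      BH to already be disabled, so process-context callers switch to the
      regular, preempt-safe variants):

          -       ICMP6_INC_STATS_BH(net, idev, ICMP6_MIB_OUTERRORS);
          +       ICMP6_INC_STATS(net, idev, ICMP6_MIB_OUTERRORS);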
      
      Fixes: c15b1cca ("ipv6: move DAD and addrconf_verify processing to workqueue")
      Reported-by: Fabio Estevam <festevam@gmail.com>
      Tested-by: Fabio Estevam <fabio.estevam@freescale.com>
      Cc: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • ipv6: strengthen fallback fragmentation id generation · 6dfac5c3
      Hannes Frederic Sowa authored
      
      
      First off, we don't need to check for non-NULL rt any more, as we are
      guaranteed to always get a valid rt6_info. Drop the check.
      
      In case we couldn't allocate an inet_peer for fragmentation
      information, we currently generate strictly incrementing
      fragmentation ids for all destinations. This is done to maximize
      the cycle and avoid collisions.

      Those fragmentation ids are very predictable; we should at least
      try to mix in the destination address.

      While it should make no difference at this point whether we simply
      use a PRNG, secure_ipv6_id() ensures that we don't leak prandom
      output, whose internal state could otherwise be recoverable.

      This fallback function should normally not get used and thus should
      not affect performance at all; it is just meant as a safety net.
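
      A sketch of the strengthened fallback (abridged from the change):

          /* No inet_peer available: mix the destination address into a
           * strictly incrementing counter and hash the result with
           * secure_ipv6_id(), so no prandom state can leak. */
          ident = atomic_inc_return(&ipv6_fragmentation_id);

          addr = rt->rt6i_dst.addr;
          addr.s6_addr32[0] ^= (__force __be32)ident;

          fhdr->identification = htonl(secure_ipv6_id(addr.s6_addr32));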
      
      Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net-gro: restore frag0 optimization · a50e233c
      Eric Dumazet authored
      
      
      The main difference between napi_frags_skb() and napi_gro_receive()
      is that the latter is called after the ethernet header has already
      been pulled by the NIC driver (eth_type_trans() is called before
      napi_gro_receive()).
      
      Jerry Chu in commit 299603e8 ("net-gro: Prepare GRO stack for the
      upcoming tunneling support") tried to remove this difference by calling
      eth_type_trans() from napi_frags_skb() instead of doing this later from
      napi_frags_finish()
      
      The goal was that napi_gro_complete() could call
      ptype->callbacks.gro_complete(skb, 0) (offset of the first network
      header = 0).
      
      Also, xxx_gro_receive() handlers all use off = skb_gro_offset(skb) to
      point to their own header, for the current skb and ones held in gro_list
      
      The problem is that this cleanup work defeated the frag0
      optimization: it turns out the consecutive pskb_may_pull() calls
      are too expensive.
      
      This patch brings back the frag0 stuff in napi_frags_skb().
      
      As all skbs now have their MAC header in the skb head, we no longer
      need skb_gro_mac_header().
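
      A sketch of the restored napi_frags_skb() fast path (abridged;
      gro_pull_from_frag0() is a helper this patch introduces): read the
      ethernet header directly out of frag0 when it is contiguous, and
      only fall back to the slow, linearizing path otherwise:

          eth = skb_gro_header_fast(skb, 0);
          if (unlikely(skb_gro_header_hard(skb, hlen))) {
                  /* Header spans beyond frag0: slow path. */
                  eth = skb_gro_header_slow(skb, hlen, 0);
                  if (unlikely(!eth)) {
                          napi_reuse_skb(napi, skb);
                          return NULL;
                  }
          } else {
                  /* Header fully inside frag0: just advance the window. */
                  gro_pull_from_frag0(skb, hlen);
                  NAPI_GRO_CB(skb)->frag0 += hlen;
                  NAPI_GRO_CB(skb)->frag0_len -= hlen;
          }
          __skb_pull(skb, hlen);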
      
      Reported-by: Michal Schmidt <mschmidt@redhat.com>
      Fixes: 299603e8 ("net-gro: Prepare GRO stack for the upcoming tunneling support")
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Cc: Jerry Chu <hkchu@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • rds: prevent dereference of a NULL device in rds_iw_laddr_check · bf39b424
      Sasha Levin authored
      
      
      Binding might result in a NULL device which is later dereferenced
      without checking.
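
      The fix amounts to one extra test before the dereference (sketch,
      abridged from the patch):

          /* cm_id->device may be NULL if binding could not resolve a
           * device, so test it before dereferencing node_type. */
          if (ret || !cm_id->device ||
              cm_id->device->node_type != RDMA_NODE_RNIC)
                  ret = -EADDRNOTAVAIL;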
      
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net-sysfs: expose number of carrier on/off changes · 2d3b479d
      david decotigny authored
      
      
      This allows monitoring carrier on/off transitions and detecting
      link flapping issues (a sketch of the counting point follows):
       - new /sys/class/net/X/carrier_changes
       - new rtnetlink IFLA_CARRIER_CHANGES (getlink)
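
      A sketch of the counting point (illustrative, abridged): a counter
      in struct net_device is bumped on every carrier transition, e.g.:

          void netif_carrier_on(struct net_device *dev)
          {
                  if (test_and_clear_bit(__LINK_STATE_NOCARRIER,
                                         &dev->state)) {
                          if (dev->reg_state == NETREG_UNINITIALIZED)
                                  return;
                          atomic_inc(&dev->carrier_changes);
                          linkwatch_fire_event(dev);
                  }
          }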
      
      Tested:
        - grep . /sys/class/net/*/carrier_changes
          + ip link set dev X down/up
          + plug/unplug cable
        - updated iproute2: prints IFLA_CARRIER_CHANGES
        - iproute2 20121211-2 (debian): unchanged behavior
      
      Signed-off-by: David Decotigny <decot@googlers.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • ipv6: tcp_ipv6 policy route issue · 9c76a114
      Wang Yufen authored
      
      
      The issue arises when adding a policy route that specifies a
      particular NIC as oif: the policy route does not take effect. The
      reason is that fl6.oif is not set, so the route lookup fails to
      match. In tcp_v6_send_response(), fl6.oif is set if the binding
      address is link-local, but not for a global address.
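
      A sketch of the implied change in tcp_v6_send_response()
      (illustrative of the description above): set fl6.flowi6_oif
      unconditionally rather than only for link-local addresses:

          -       if (ipv6_addr_type(&fl6.daddr) & IPV6_ADDR_LINKLOCAL)
          -               fl6.flowi6_oif = inet6_iif(skb);
          +       fl6.flowi6_oif = inet6_iif(skb);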
      
      Acked-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
      Signed-off-by: Wang Yufen <wangyufen@huawei.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • ipv6: reuse rt6_need_strict · 60ea37f7
      Wang Yufen authored
      
      
      Move rt6_need_strict() as a static inline into ip6_route.h, so
      that it can be reused (see the sketch below).
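
      For reference, the helper simply tests for address types that need
      a strict interface match (sketch of its body):

          static inline bool rt6_need_strict(const struct in6_addr *daddr)
          {
                  return ipv6_addr_type(daddr) &
                         (IPV6_ADDR_MULTICAST | IPV6_ADDR_LINKLOCAL |
                          IPV6_ADDR_LOOPBACK);
          }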
      
      Signed-off-by: Wang Yufen <wangyufen@huawei.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • ipv6: tcp_ipv6 do some cleanup · 4aa956d8
      Wang Yufen authored
      
      
      Signed-off-by: Wang Yufen <wangyufen@huawei.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • bridge: use is_skb_forwardable in forward path · f6367b46
      Vlad Yasevich authored
      
      
      Use existing function instead of trying to use our own.
      
      Signed-off-by: Vlad Yasevich <vyasevic@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • 1ee481fb
      Vlad Yasevich authored
    • net: filter: rework/optimize internal BPF interpreter's instruction set · bd4cf0ed
      Alexei Starovoitov authored
      
      
      This patch replaces/reworks the kernel-internal BPF interpreter
      with an optimized BPF instruction set format that is modelled to
      more closely mimic native instruction sets and is designed to be
      JITed with a one-to-one mapping. Thus, the new interpreter is
      noticeably faster than the current implementation of
      sk_run_filter(), mainly for two reasons:
      
      1. Fall-through jumps:

        Classic BPF jump instructions are forced to take either the
        'true' or the 'false' branch, which incurs a branch-miss penalty.
        The new BPF jump instructions have only one branch and fall
        through otherwise, which fits CPU branch predictor logic better.
        `perf stat` shows a drastic difference in branch-misses between
        the old and the new code.

      2. Jump-threaded implementation of the interpreter vs a switch
         statement:

        Instead of a single table jump at the top of the 'switch'
        statement, gcc now generates multiple table-jump instructions,
        which helps CPU branch predictor logic.
      
      Note that the verification of filters is still being done through
      sk_chk_filter() in classical BPF format, so filters from user- or
      kernel space are verified in the same way as we do now, and same
      restrictions/constraints hold as well.
      
      We reuse current BPF JIT compilers in a way that this upgrade would
      even be fine as is, but nevertheless allows for a successive upgrade
      of BPF JIT compilers to the new format.
      
      The internal instruction set migration is being done after the
      probing for JIT compilation, so in case JIT compilers are able to
      create a native opcode image, we're going to use that, and in all
      other cases we're doing a follow-up migration of the BPF program's
      instruction set, so that it can be transparently run in the new
      interpreter.
      
      In short, the *internal* format extends BPF in the following way
      (more details can be taken from the appended documentation; a
      sketch of the instruction encoding follows the list):
      
        - Number of registers increase from 2 to 10
        - Register width increases from 32-bit to 64-bit
        - Conditional jt/jf targets replaced with jt/fall-through
        - Adds signed > and >= insns
        - 16 4-byte stack slots for register spill-fill replaced
          with up to 512 bytes of multi-use stack space
        - Introduction of bpf_call insn and register passing convention
          for zero overhead calls from/to other kernel functions
        - Adds arithmetic right shift and endianness conversion insns
        - Adds atomic_add insn
        - Old tax/txa insns are replaced with 'mov dst,src' insn
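
      For reference, each instruction of the new format fits into a fixed
      8 bytes (a sketch; field names as of this patch era, later renamed
      to dst_reg/src_reg):

          struct sock_filter_int {
                  __u8    code;           /* opcode */
                  __u8    a_reg:4;        /* dest register */
                  __u8    x_reg:4;        /* source register */
                  __s16   off;            /* signed offset */
                  __s32   imm;            /* signed immediate constant */
          };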
      
      The performance of two BPF filters generated by libpcap and
      bpf_asm, respectively, was measured on x86_64, i386 and arm32
      (other libpcap programs show similar performance differences):
      
      fprog #1 is taken from Documentation/networking/filter.txt:
      tcpdump -i eth0 port 22 -dd
      
      fprog #2 is taken from 'man tcpdump':
      tcpdump -i eth0 'tcp port 22 and (((ip[2:2] - ((ip[0]&0xf)<<2)) -
         ((tcp[12]&0xf0)>>2)) != 0)' -dd
      
      Raw performance data from BPF micro-benchmark: SK_RUN_FILTER on the
      same SKB (cache-hit) or 10k SKBs (cache-miss); time in ns per call,
      smaller is better:
      
      --x86_64--
               fprog #1  fprog #1   fprog #2  fprog #2
               cache-hit cache-miss cache-hit cache-miss
      old BPF      90       101        192       202
      new BPF      31        71         47        97
      old BPF jit  12        34         17        44
      new BPF jit TBD
      
      --i386--
               fprog #1  fprog #1   fprog #2  fprog #2
               cache-hit cache-miss cache-hit cache-miss
      old BPF     107       136        227       252
      new BPF      40       119         69       172
      
      --arm32--
               fprog #1  fprog #1   fprog #2  fprog #2
               cache-hit cache-miss cache-hit cache-miss
      old BPF     202       300        475       540
      new BPF     180       270        330       470
      old BPF jit  26       182         37       202
      new BPF jit TBD
      
      Thus, without changing any userland BPF filters, applications on
      top of AF_PACKET (or other families) such as libpcap/tcpdump, cls_bpf
      classifier, netfilter's xt_bpf, team driver's load-balancing mode,
      and many more will have better interpreter filtering performance.
      
      While we are replacing the internal BPF interpreter, we also need
      to convert seccomp BPF in the same step to make use of the new
      internal structure since it makes use of lower-level API details
      without being further decoupled through higher-level calls like
      sk_unattached_filter_{create,destroy}(), for example.
      
      Just as for normal socket filtering, also seccomp BPF experiences
      a time-to-verdict speedup:
      
      05-sim-long_jumps.c of libseccomp was used as micro-benchmark:
      
        seccomp_rule_add_exact(ctx,...
        seccomp_rule_add_exact(ctx,...
      
        rc = seccomp_load(ctx);
      
        for (i = 0; i < 10000000; i++)
           syscall(199, 100);
      
      'short filter' has 2 rules
      'large filter' has 200 rules
      
      'short filter' performance is slightly better on x86_64/i386/arm32
      'large filter' is much faster on x86_64 and i386 and shows no
                     difference on arm32
      
      --x86_64-- short filter
      old BPF: 2.7 sec
       39.12%  bench  libc-2.15.so       [.] syscall
        8.10%  bench  [kernel.kallsyms]  [k] sk_run_filter
        6.31%  bench  [kernel.kallsyms]  [k] system_call
        5.59%  bench  [kernel.kallsyms]  [k] trace_hardirqs_on_caller
        4.37%  bench  [kernel.kallsyms]  [k] trace_hardirqs_off_caller
        3.70%  bench  [kernel.kallsyms]  [k] __secure_computing
        3.67%  bench  [kernel.kallsyms]  [k] lock_is_held
        3.03%  bench  [kernel.kallsyms]  [k] seccomp_bpf_load
      new BPF: 2.58 sec
       42.05%  bench  libc-2.15.so       [.] syscall
        6.91%  bench  [kernel.kallsyms]  [k] system_call
        6.25%  bench  [kernel.kallsyms]  [k] trace_hardirqs_on_caller
        6.07%  bench  [kernel.kallsyms]  [k] __secure_computing
        5.08%  bench  [kernel.kallsyms]  [k] sk_run_filter_int_seccomp
      
      --arm32-- short filter
      old BPF: 4.0 sec
       39.92%  bench  [kernel.kallsyms]  [k] vector_swi
       16.60%  bench  [kernel.kallsyms]  [k] sk_run_filter
       14.66%  bench  libc-2.17.so       [.] syscall
        5.42%  bench  [kernel.kallsyms]  [k] seccomp_bpf_load
        5.10%  bench  [kernel.kallsyms]  [k] __secure_computing
      new BPF: 3.7 sec
       35.93%  bench  [kernel.kallsyms]  [k] vector_swi
       21.89%  bench  libc-2.17.so       [.] syscall
       13.45%  bench  [kernel.kallsyms]  [k] sk_run_filter_int_seccomp
        6.25%  bench  [kernel.kallsyms]  [k] __secure_computing
        3.96%  bench  [kernel.kallsyms]  [k] syscall_trace_exit
      
      --x86_64-- large filter
      old BPF: 8.6 seconds
          73.38%    bench  [kernel.kallsyms]  [k] sk_run_filter
          10.70%    bench  libc-2.15.so       [.] syscall
           5.09%    bench  [kernel.kallsyms]  [k] seccomp_bpf_load
           1.97%    bench  [kernel.kallsyms]  [k] system_call
      new BPF: 5.7 seconds
          66.20%    bench  [kernel.kallsyms]  [k] sk_run_filter_int_seccomp
          16.75%    bench  libc-2.15.so       [.] syscall
           3.31%    bench  [kernel.kallsyms]  [k] system_call
           2.88%    bench  [kernel.kallsyms]  [k] __secure_computing
      
      --i386-- large filter
      old BPF: 5.4 sec
      new BPF: 3.8 sec
      
      --arm32-- large filter
      old BPF: 13.5 sec
       73.88%  bench  [kernel.kallsyms]  [k] sk_run_filter
       10.29%  bench  [kernel.kallsyms]  [k] vector_swi
        6.46%  bench  libc-2.17.so       [.] syscall
        2.94%  bench  [kernel.kallsyms]  [k] seccomp_bpf_load
        1.19%  bench  [kernel.kallsyms]  [k] __secure_computing
        0.87%  bench  [kernel.kallsyms]  [k] sys_getuid
      new BPF: 13.5 sec
       76.08%  bench  [kernel.kallsyms]  [k] sk_run_filter_int_seccomp
       10.98%  bench  [kernel.kallsyms]  [k] vector_swi
        5.87%  bench  libc-2.17.so       [.] syscall
        1.77%  bench  [kernel.kallsyms]  [k] __secure_computing
        0.93%  bench  [kernel.kallsyms]  [k] sys_getuid
      
      BPF filters generated by seccomp are very branchy, so the new
      internal BPF performance is better than the old one. Performance
      gains will be even higher when BPF JIT is committed for the
      new structure, which is planned in future work (as successive
      JIT migrations).
      
      BPF has also been stress-tested with trinity's BPF fuzzer.
      
      Joint work with Daniel Borkmann.
      
      Signed-off-by: Alexei Starovoitov <ast@plumgrid.com>
      Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
      Cc: Hagen Paul Pfeifer <hagen@jauu.net>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Paul Moore <pmoore@redhat.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: H. Peter Anvin <hpa@linux.intel.com>
      Cc: linux-kernel@vger.kernel.org
      Acked-by: Kees Cook <keescook@chromium.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: ptp: do not reimplement PTP/BPF classifier · 164d8c66
      Daniel Borkmann authored
      
      
      There are currently pch_gbe, cpts, and ixp4xx_eth drivers that open-code
      and reimplement a BPF classifier for the PTP protocol. Since all of them
      effectively do the very same thing and load the very same PTP/BPF filter,
      we can just consolidate that code by introducing ptp_classify_raw() in
      the time-stamping core framework which can be used in drivers.
      
      As drivers get initialized after the core networking subsystem has
      been bootstrapped, they can make use of ptp_insns wrapped through
      ptp_classify_raw(), which allows us to simplify and remove the PTP
      classifier setup code in drivers (see the usage sketch below).
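
      Driver-side usage then reduces to something like the following
      (illustrative sketch):

          unsigned int type = ptp_classify_raw(skb);

          if (type == PTP_CLASS_NONE)
                  return;         /* not a PTP frame, nothing to do */

          /* otherwise, type encodes PTP version and transport */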
      
      Joint work with Alexei Starovoitov.
      
      Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
      Signed-off-by: Alexei Starovoitov <ast@plumgrid.com>
      Cc: Richard Cochran <richard.cochran@omicron.at>
      Cc: Jiri Benc <jbenc@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: ptp: use sk_unattached_filter_create() for BPF · e62d2df0
      Daniel Borkmann authored
      
      
      This patch migrates an open-coded sk_run_filter() implementation
      to proper use of the BPF API, that is, sk_unattached_filter_create().
      This migration is needed, as we will be internally transforming the
      filter into a different representation, and the code therefore
      needs to be decoupled from interpreter internals.
      
      It is okay to do so as skb_timestamping_init() is called during
      initialization of the network stack in a core initcall via
      sock_init(). This effectively also allows PTP filters to be
      JIT-compiled if bpf_jit_enable is set.
      
      For better readability, some newlines are also introduced; in
      addition, ptp_classify.h is now restricted to kernel space. A
      sketch of the setup pattern follows.
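
      A sketch of the setup pattern (illustrative, abridged; ptp_filter[]
      stands for the existing classic BPF PTP program):

          static struct sk_filter *ptp_insns __read_mostly;

          static struct sock_fprog ptp_prog __initdata = {
                  .len = ARRAY_SIZE(ptp_filter), .filter = ptp_filter,
          };

          void __init skb_timestamping_init(void)
          {
                  BUG_ON(sk_unattached_filter_create(&ptp_insns, &ptp_prog));
          }

          /* classification then runs through the filter core, e.g.
           * SK_RUN_FILTER(ptp_insns, skb); */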
      
      Joint work with Alexei Starovoitov.
      
      Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
      Signed-off-by: Alexei Starovoitov <ast@plumgrid.com>
      Cc: Richard Cochran <richard.cochran@omicron.at>
      Cc: Jiri Benc <jbenc@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: filter: move filter accounting to filter core · fbc907f0
      Daniel Borkmann authored
      
      
      This patch basically does two things, i) removes the extern keyword
      from the include/linux/filter.h file to be more consistent with the
      rest of Joe's changes, and ii) moves filter accounting into the filter
      core framework.
      
      Filter accounting, mainly done through sk_filter_{un,}charge(),
      takes care of the case when sockets are being cloned through
      sk_clone_lock(), so that removal of the filter on one socket won't
      result in eviction while it is still referenced by the other.

      These functions actually belong in net/core/filter.c and not
      include/net/sock.h, as we want to keep all of that in a central
      place. They are also not in the fast path, so uninlining them is
      fine and even allows us to get rid of sk_filter_release_rcu()'s
      EXPORT_SYMBOL and a forward declaration.
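
      A sketch of the two uninlined helpers (illustrative; they pair the
      refcount with sk_omem_alloc accounting):

          void sk_filter_uncharge(struct sock *sk, struct sk_filter *fp)
          {
                  atomic_sub(sk_filter_size(fp->len), &sk->sk_omem_alloc);
                  sk_filter_release(fp);
          }

          void sk_filter_charge(struct sock *sk, struct sk_filter *fp)
          {
                  atomic_inc(&fp->refcnt);
                  atomic_add(sk_filter_size(fp->len), &sk->sk_omem_alloc);
          }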
      
      Joint work with Alexei Starovoitov.
      
      Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
      Signed-off-by: Alexei Starovoitov <ast@plumgrid.com>
      Cc: Pavel Emelyanov <xemul@parallels.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: filter: keep original BPF program around · a3ea269b
      Daniel Borkmann authored
      
      
      In order to open up the possibility of internally transforming a
      BPF program into an alternative and possibly non-trivially
      reversible representation, we need to keep the original BPF program
      around, so that it can be passed back to user space without the
      need for a complex decoder.
      
      The reason for that use case resides in commit a8fc9277 ("sk-filter:
      Add ability to get socket filter program (v2)"), that is, the ability
      to retrieve the currently attached BPF filter from a given socket used
      mainly by the checkpoint-restore project, for example.
      
      Therefore, we add two helpers sk_{store,release}_orig_filter for taking
      care of that. In the sk_unattached_filter_create() case, there's no such
      possibility/requirement to retrieve a loaded BPF program. Therefore, we
      can spare us the work in that case.
      
      This approach will simplify and slightly speed up both the
      sk_get_filter() and sock_diag_put_filterinfo() handlers, as we
      won't need to successively decode filters through
      sk_decode_filter() anymore. As we still need sk_decode_filter()
      later on, we're keeping it around.
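
      A sketch of the store helper (illustrative; error handling
      abridged): keep a kernel-side copy of the classic program so
      sk_get_filter() can hand it back verbatim:

          static int sk_store_orig_filter(struct sk_filter *fp,
                                          const struct sock_fprog *fprog)
          {
                  struct sock_fprog_kern *fkprog;

                  fp->orig_prog = kmalloc(sizeof(*fkprog), GFP_KERNEL);
                  if (!fp->orig_prog)
                          return -ENOMEM;

                  fkprog = fp->orig_prog;
                  fkprog->len = fprog->len;
                  fkprog->filter = kmemdup(fp->insns,
                                           fprog->len * sizeof(struct sock_filter),
                                           GFP_KERNEL);
                  if (!fkprog->filter) {
                          kfree(fp->orig_prog);
                          return -ENOMEM;
                  }

                  return 0;
          }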
      
      Joint work with Alexei Starovoitov.
      
      Signed-off-by: Alexei Starovoitov <ast@plumgrid.com>
      Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
      Cc: Pavel Emelyanov <xemul@parallels.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: filter: add jited flag to indicate jit compiled filters · f8bbbfc3
      Daniel Borkmann authored
      
      
      This patch adds a jited flag into sk_filter struct in order to indicate
      whether a filter is currently jited or not. The size of sk_filter is
      not being expanded as the 32 bit 'len' member allows upper bits to be
      reused since a filter can currently only grow as large as BPF_MAXINSNS.
      
      Therefore, there is also enough room for other flags needed in the
      future to reuse the 'len' field if necessary. The jited flag also
      allows for alternative interpreter functions to be run, as
      currently we can only detect JIT-compiled filters by testing that
      fp->bpf_func does not equal the address of sk_run_filter().
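
      The layout trick, sketched (abridged): fold the flag into the
      existing 32-bit length word:

          struct sk_filter {
                  atomic_t        refcnt;
                  u32             jited:1,        /* set once bpf_func is a JIT image */
                                  len:31;         /* number of filter blocks */
                  /* ... rcu head, bpf_func, insns ... */
          };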
      
      Joint work with Alexei Starovoitov.
      
      Signed-off-by: Alexei Starovoitov <ast@plumgrid.com>
      Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
      Cc: Pablo Neira Ayuso <pablo@netfilter.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>