[RFC PATCH 3/7] trace: Optimize trace_get_context_bit()
From: Daniel Bristot de Oliveira
Date: Tue Apr 02 2019 - 16:04:25 EST
trace_get_context_bit() and trace_recursive_lock() use the same logic,
but the latter reads the per_cpu variable only once.
Use trace_recursive_lock()'s logic in trace_get_context_bit().
Signed-off-by: Daniel Bristot de Oliveira <bristot@xxxxxxxxxx>
Cc: Steven Rostedt <rostedt@xxxxxxxxxxx>
Cc: Arnaldo Carvalho de Melo <acme@xxxxxxxxxx>
Cc: Ingo Molnar <mingo@xxxxxxxxxx>
Cc: Andy Lutomirski <luto@xxxxxxxxxx>
Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Cc: Borislav Petkov <bp@xxxxxxxxx>
Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
Cc: "H. Peter Anvin" <hpa@xxxxxxxxx>
Cc: "Joel Fernandes (Google)" <joel@xxxxxxxxxxxxxxxxx>
Cc: Jiri Olsa <jolsa@xxxxxxxxxx>
Cc: Namhyung Kim <namhyung@xxxxxxxxxx>
Cc: Alexander Shishkin <alexander.shishkin@xxxxxxxxxxxxxxx>
Cc: Tommaso Cucinotta <tommaso.cucinotta@xxxxxxxxxxxxxxx>
Cc: Romulo Silva de Oliveira <romulo.deoliveira@xxxxxxx>
Cc: Clark Williams <williams@xxxxxxxxxx>
Cc: linux-kernel@xxxxxxxxxxxxxxx
Cc: x86@xxxxxxxxxx
---
kernel/trace/trace.h | 19 ++++++-------------
1 file changed, 6 insertions(+), 13 deletions(-)
diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
index dad2f0cd7208..09318748fab8 100644
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -635,20 +635,13 @@ enum {
 static __always_inline int trace_get_context_bit(void)
 {
-	int bit;
-
-	if (in_interrupt()) {
-		if (in_nmi())
-			bit = TRACE_CTX_NMI;
+	unsigned long pc = preempt_count();
-		else if (in_irq())
-			bit = TRACE_CTX_IRQ;
-		else
-			bit = TRACE_CTX_SOFTIRQ;
-	} else
-		bit = TRACE_CTX_NORMAL;
-
-	return bit;
+	if (pc & (NMI_MASK | HARDIRQ_MASK | SOFTIRQ_OFFSET))
+		return pc & NMI_MASK ? TRACE_CTX_NMI :
+			pc & HARDIRQ_MASK ? TRACE_CTX_IRQ : TRACE_CTX_SOFTIRQ;
+	else
+		return TRACE_CTX_NORMAL;
 }
 static __always_inline int trace_test_and_set_recursion(int start, int max)
--
2.20.1