
elasticsearchexporter: Introduce LRU cache for profiles #38606


Merged Apr 17, 2025 (36 commits)
0d7d076
introduce lru cache for profiles
dmathieu Mar 13, 2025
e663670
add changelog entry
dmathieu Mar 13, 2025
fdcebae
Merge branch 'main' into profiles-cache
dmathieu Mar 13, 2025
2cb32a0
Merge branch 'main' into profiles-cache
dmathieu Mar 26, 2025
6b02110
move the LRU to the serializer
dmathieu Mar 26, 2025
18755fb
take cache lifetime into account in lru
dmathieu Mar 26, 2025
a848fc1
simplify lruset to not expose freelru
dmathieu Mar 26, 2025
8a93a3f
Merge branch 'main' into profiles-cache
dmathieu Mar 28, 2025
55998a9
cache unsymbolized frames and executables too
dmathieu Mar 28, 2025
f263b61
Update exporter/elasticsearchexporter/internal/lru/lruset.go
dmathieu Mar 28, 2025
9c10cd5
Update exporter/elasticsearchexporter/internal/lru/lruset.go
dmathieu Mar 28, 2025
2b4184f
fix errors
dmathieu Mar 28, 2025
2ad815b
rename excluded to lru
dmathieu Mar 28, 2025
ed5bdc2
don't reference kib when it's docs count
dmathieu Mar 28, 2025
df27a37
Update exporter/elasticsearchexporter/internal/serializer/otelseriali…
dmathieu Apr 8, 2025
988758f
Update exporter/elasticsearchexporter/internal/serializer/otelseriali…
dmathieu Apr 8, 2025
d4cf4ca
add a benchmark for serialize profiles
dmathieu Apr 8, 2025
7ed361b
lock the lrus individually
dmathieu Apr 8, 2025
57ac2c0
Merge branch 'main' into profiles-cache
dmathieu Apr 8, 2025
6c95df8
fix expected data order
dmathieu Apr 8, 2025
1aa5864
fix lint
dmathieu Apr 8, 2025
62c6ff3
Merge branch 'main' into profiles-cache
dmathieu Apr 11, 2025
d37dc4d
only create the lrus if we need them
dmathieu Apr 11, 2025
00ed712
use sync.Once rather than a mutex
dmathieu Apr 11, 2025
d945545
don't reuse the global error
dmathieu Apr 11, 2025
d52eedb
Merge branch 'main' into profiles-cache
dmathieu Apr 11, 2025
401b336
if loading the LRUs fails, we also want to fail subsequent serializat…
dmathieu Apr 11, 2025
60dd3e4
Merge branch 'main' into profiles-cache
dmathieu Apr 15, 2025
18babf8
use the pprofiletest package
dmathieu Apr 15, 2025
3f52cdc
Merge branch 'main' into profiles-cache
dmathieu Apr 15, 2025
4001866
run go mod tidy
dmathieu Apr 15, 2025
9747793
use keyed fields
dmathieu Apr 15, 2025
d6365d9
Merge branch 'main' into profiles-cache
dmathieu Apr 16, 2025
3ab0f03
return an error rather than defining a nil lru set
dmathieu Apr 16, 2025
74dd183
remov the nil lruset check entirely
dmathieu Apr 16, 2025
bbbdaa4
Merge branch 'main' into profiles-cache
dmathieu Apr 16, 2025
27 changes: 27 additions & 0 deletions .chloggen/elasticsearch-lru.yaml
@@ -0,0 +1,27 @@
# Use this changelog template to create an entry for release notes.

# One of 'breaking', 'deprecation', 'new_component', 'enhancement', 'bug_fix'
change_type: enhancement

# The name of the component, or a single word describing the area of concern, (e.g. filelogreceiver)
component: elasticsearchexporter

# A brief description of the change. Surround your text with quotes ("") if it needs to start with a backtick (`).
note: Introduce LRU cache for profiles

# Mandatory: One or more tracking issues related to the change. You can use the PR number here if no issue exists.
issues: [38606]

# (Optional) One or more lines of additional information to render under the primary note.
# These lines will be padded with 2 spaces and then inserted directly into the document.
# Use pipe (|) for multiline entries.
subtext:

# If your change doesn't affect end users or the exported elements of any package,
# you should instead start your pull request title with [chore] or use the "Skip Changelog" label.
# Optional: The change log or logs in which this entry should be included.
# e.g. '[user]' or '[user, api]'
# Include 'user' if the change is relevant to end users.
# Include 'api' if there is a change to a library API.
# Default: '[user]'
change_logs: [user]
3 changes: 2 additions & 1 deletion exporter/elasticsearchexporter/go.mod
@@ -4,8 +4,10 @@ go 1.23.0

require (
github.com/cenkalti/backoff/v4 v4.3.0
github.com/cespare/xxhash v1.1.0
github.com/elastic/go-docappender/v2 v2.9.0
github.com/elastic/go-elasticsearch/v8 v8.17.1
github.com/elastic/go-freelru v0.16.0
github.com/elastic/go-structform v0.0.12
github.com/klauspost/compress v1.18.0
github.com/lestrrat-go/strftime v1.1.0
@@ -44,7 +46,6 @@ require (
github.com/cilium/ebpf v0.16.0 // indirect
github.com/davecgh/go-spew v1.1.1 // indirect
github.com/elastic/elastic-transport-go/v8 v8.6.1 // indirect
github.com/elastic/go-freelru v0.16.0 // indirect
github.com/elastic/go-sysinfo v1.15.3 // indirect
github.com/elastic/go-windows v1.0.2 // indirect
github.com/felixge/httpsnoop v1.0.4 // indirect
6 changes: 6 additions & 0 deletions exporter/elasticsearchexporter/go.sum


1 change: 1 addition & 0 deletions exporter/elasticsearchexporter/integrationtest/go.mod
@@ -43,6 +43,7 @@ require (
github.com/cenkalti/backoff/v4 v4.3.0 // indirect
github.com/cenkalti/backoff/v5 v5.0.2 // indirect
github.com/census-instrumentation/opencensus-proto v0.4.1 // indirect
github.com/cespare/xxhash v1.1.0 // indirect
github.com/cespare/xxhash/v2 v2.3.0 // indirect
github.com/cilium/ebpf v0.16.0 // indirect
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect
6 changes: 6 additions & 0 deletions exporter/elasticsearchexporter/integrationtest/go.sum


74 changes: 74 additions & 0 deletions exporter/elasticsearchexporter/internal/lru/lruset.go
@@ -0,0 +1,74 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0

package lru // import "github.com/open-telemetry/opentelemetry-collector-contrib/exporter/elasticsearchexporter/internal/lru"

import (
"time"

"github.com/cespare/xxhash"
"github.com/elastic/go-freelru"
"go.opentelemetry.io/ebpf-profiler/libpf/xsync"
)

type void struct{}

func stringHashFn(s string) uint32 {
return uint32(xxhash.Sum64String(s))
}

// LockedLRUSet is the interface provided to the LRUSet once a lock has been
// acquired.
type LockedLRUSet interface {
// CheckAndAdd checks whether the entry is already stored in the cache, and
// adds it.
// It returns whether the entry should be excluded, as it was already present
// in cache.
CheckAndAdd(entry string) bool
}

// LRUSet is an LRU cache implementation that allows acquiring a lock, and
// checking whether specific keys have already been stored.
type LRUSet struct {
syncMu *xsync.RWMutex[*freelru.LRU[string, void]]
}

func (l *LRUSet) WithLock(fn func(LockedLRUSet) error) error {
if l == nil || l.syncMu == nil {
return fn(nilLockedLRUSet{})
}

lru := l.syncMu.WLock()
defer l.syncMu.WUnlock(&lru)

return fn(lockedLRUSet{*lru})
}

func NewLRUSet(size uint32, rollover time.Duration) (*LRUSet, error) {
lru, err := freelru.New[string, void](size, stringHashFn)
if err != nil {
return nil, err
}
lru.SetLifetime(rollover)

syncMu := xsync.NewRWMutex(lru)
return &LRUSet{syncMu: &syncMu}, nil
}

type lockedLRUSet struct {
lru *freelru.LRU[string, void]
}

func (l lockedLRUSet) CheckAndAdd(entry string) (excluded bool) {
if _, exclude := (l.lru).Get(entry); exclude {
return true
}
(l.lru).Add(entry, void{})
return false
}

type nilLockedLRUSet struct{}

func (l nilLockedLRUSet) CheckAndAdd(string) bool {
return false
}
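The CheckAndAdd contract above can be illustrated with a minimal stand-in: a plain map replaces the freelru-backed set, so there is no eviction or lifetime handling, and the names below are illustrative rather than part of the exporter:

```go
package main

import "fmt"

// set is a map-based stand-in for the freelru-backed LRUSet.
type set map[string]struct{}

// checkAndAdd mirrors LockedLRUSet.CheckAndAdd: it reports whether the
// entry was already present, and inserts it either way. A true result
// means the caller can skip re-indexing the corresponding document.
func (s set) checkAndAdd(entry string) bool {
	if _, ok := s[entry]; ok {
		return true // already cached
	}
	s[entry] = struct{}{}
	return false
}

func main() {
	s := set{}
	fmt.Println(s.checkAndAdd("a")) // false: first sight, now cached
	fmt.Println(s.checkAndAdd("a")) // true: duplicate, can be skipped
	fmt.Println(s.checkAndAdd("b")) // false: new entry
}
```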
102 changes: 102 additions & 0 deletions exporter/elasticsearchexporter/internal/lru/lruset_test.go
@@ -0,0 +1,102 @@
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0

package lru

import (
"testing"
"time"

"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)

func TestLRUSet(t *testing.T) {
cache, err := NewLRUSet(5, time.Minute)
require.NoError(t, err)

err = cache.WithLock(func(lock LockedLRUSet) error {
assert.False(t, lock.CheckAndAdd("a"))
assert.True(t, lock.CheckAndAdd("a"))
assert.False(t, lock.CheckAndAdd("b"))

assert.InDelta(t, 0.0, testing.AllocsPerRun(5, func() {
_ = lock.CheckAndAdd("c")
}), 0)

return nil
})

assert.NoError(t, err)
}

func TestLRUSetLifeTime(t *testing.T) {
const lifetime = 100 * time.Millisecond
cache, err := NewLRUSet(5, lifetime)
require.NoError(t, err)

err = cache.WithLock(func(lock LockedLRUSet) error {
assert.False(t, lock.CheckAndAdd("a"))
assert.True(t, lock.CheckAndAdd("a"))
return nil
})
require.NoError(t, err)

// Wait until cache item is expired.
time.Sleep(lifetime)
err = cache.WithLock(func(lock LockedLRUSet) error {
assert.False(t, lock.CheckAndAdd("a"))
assert.True(t, lock.CheckAndAdd("a"))
return nil
})
require.NoError(t, err)

// Wait 50% of the lifetime, so the item is not expired.
time.Sleep(lifetime / 2)
err = cache.WithLock(func(lock LockedLRUSet) error {
assert.True(t, lock.CheckAndAdd("a"))
return nil
})
require.NoError(t, err)

// Wait another 50% of the lifetime, so the item should be expired.
time.Sleep(lifetime / 2)
err = cache.WithLock(func(lock LockedLRUSet) error {
assert.False(t, lock.CheckAndAdd("a"))
return nil
})
require.NoError(t, err)
}

func TestNilLRUSet(t *testing.T) {
cache := &LRUSet{}

err := cache.WithLock(func(lock LockedLRUSet) error {
assert.False(t, lock.CheckAndAdd("a"))
assert.False(t, lock.CheckAndAdd("a"))
assert.False(t, lock.CheckAndAdd("b"))

assert.InDelta(t, 0.0, testing.AllocsPerRun(5, func() {
_ = lock.CheckAndAdd("c")
}), 0)

return nil
})

assert.NoError(t, err)
}

func BenchmarkLRUSetCheck(b *testing.B) {
cache, err := NewLRUSet(5, time.Minute)
require.NoError(b, err)

_ = cache.WithLock(func(lock LockedLRUSet) error {
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
lock.CheckAndAdd("a")
}

return nil
})
}
@@ -9,6 +9,7 @@ import (
"testing"

"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"go.opentelemetry.io/collector/pdata/pcommon"
"go.opentelemetry.io/collector/pdata/plog"

@@ -185,8 +186,9 @@ func TestSerializeLog(t *testing.T) {
logs.MarkReadOnly()

var buf bytes.Buffer
ser := New()
err := ser.SerializeLog(resourceLogs.Resource(), "", scopeLogs.Scope(), "", record, elasticsearch.Index{}, &buf)
ser, err := New()
require.NoError(t, err)
err = ser.SerializeLog(resourceLogs.Resource(), "", scopeLogs.Scope(), "", record, elasticsearch.Index{}, &buf)
if (err != nil) != tt.wantErr {
t.Errorf("Log() error = %v, wantErr %v", err, tt.wantErr)
}
@@ -10,6 +10,7 @@ import (
"testing"

"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"go.opentelemetry.io/collector/pdata/pmetric"

"github.com/open-telemetry/opentelemetry-collector-contrib/exporter/elasticsearchexporter/internal/datapoints"
@@ -33,8 +34,9 @@ func TestSerializeMetricsConflict(t *testing.T) {

var validationErrors []error
var buf bytes.Buffer
ser := New()
_, err := ser.SerializeMetrics(resourceMetrics.Resource(), "", scopeMetrics.Scope(), "", dataPoints, &validationErrors, elasticsearch.Index{}, &buf)
ser, err := New()
require.NoError(t, err)
_, err = ser.SerializeMetrics(resourceMetrics.Resource(), "", scopeMetrics.Scope(), "", dataPoints, &validationErrors, elasticsearch.Index{}, &buf)
if err != nil {
t.Errorf("Metrics() error = %v", err)
}
@@ -3,9 +3,34 @@

package otelserializer // import "github.com/open-telemetry/opentelemetry-collector-contrib/exporter/elasticsearchexporter/internal/serializer/otelserializer"

type Serializer struct{}
import (
"sync"
"time"

"github.com/open-telemetry/opentelemetry-collector-contrib/exporter/elasticsearchexporter/internal/lru"
)

const (
knownExecutablesCacheSize = 16 * 1024
knownFramesCacheSize = 128 * 1024
knownTracesCacheSize = 128 * 1024
knownUnsymbolizedFramesCacheSize = 128 * 1024
knownUnsymbolizedExecutablesCacheSize = 16 * 1024

minILMRolloverTime = 3 * time.Hour
)

type Serializer struct {
// Data cache for profiles
loadLRUsOnce sync.Once
knownTraces *lru.LRUSet
knownFrames *lru.LRUSet
knownExecutables *lru.LRUSet
knownUnsymbolizedFrames *lru.LRUSet
knownUnsymbolizedExecutables *lru.LRUSet
}

// New builds a new Serializer
func New() *Serializer {
return &Serializer{}
func New() (*Serializer, error) {
Contributor:
this New is shared by all signal types, but is called once per signal type, IIUC. Given that freelru.New calls make in NewWithSize, doesn't it allocate memory for no benefit in each non-profiles ES exporter?

Member Author:
Yes; unless we start having per-signal exporter structs, we don't really have a solution here.
This additional memory is going to be quite small, since the LRU will stay empty. It's also not going to grow.

Contributor:
We're talking about the following allocations for each LRUSet:

	buckets := make([]uint32, size)
	elements := make([]element[K, V], size)

with size = 128*1024 = 131072.

Assuming each element is 28 B, that gives
5 * 131072 * (4 + 28) ≈ 20 MB for each signal. Is the math right?

Contributor (rockdaboot, Mar 28, 2025):

That's roughly correct. With the suggestion to reduce the size for two of the 5, it would be
32 * (128 * 1024 * 3 + 16 * 1024 * 2) ≈ 13.6 MB
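As a sanity check, the revised per-signal estimate can be reproduced in a few lines. The 4 B bucket and ~28 B element sizes are the assumptions from this thread, not measured values:

```go
package main

import "fmt"

func main() {
	// Assumed per-entry overhead from the thread: 4 B bucket + ~28 B element.
	const perEntry = 4 + 28

	// Cache sizes from the PR: three LRUs of 128Ki entries, two of 16Ki.
	sizes := []int{128 * 1024, 128 * 1024, 128 * 1024, 16 * 1024, 16 * 1024}

	total := 0
	for _, n := range sizes {
		total += n * perEntry
	}
	fmt.Printf("%d bytes ≈ %.1f MB per signal\n", total, float64(total)/1e6)
	// prints: 13631488 bytes ≈ 13.6 MB per signal
}
```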

Contributor:

go bench is not the correct tool to measure memory usage, as it gets averaged out over b.N. I compiled contrib with and without this PR, and used this config:

exporters:
  elasticsearch:
    endpoint: http://localhost:9200
  debug:
    verbosity: detailed
processors:
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318
service:
  pipelines:
    logs:
      exporters:
        - elasticsearch
      processors:
      receivers:
        - otlp
    metrics:
      exporters:
        - elasticsearch
      processors:
      receivers:
        - otlp
    traces:
      exporters:
        - elasticsearch
      processors:
      receivers:
        - otlp
  telemetry:
    logs:
      encoding: json
      level: info
    metrics:
      address: 0.0.0.0:8888
      level: detailed

before:

# HELP otelcol_process_cpu_seconds Total CPU user and system time in seconds [alpha]
# TYPE otelcol_process_cpu_seconds counter
otelcol_process_cpu_seconds{service_instance_id="0f075fca-13ae-4191-9f97-590ae6129e57",service_name="otelcontribcol",service_version="0.123.0-dev"} 0.16
# HELP otelcol_process_memory_rss Total physical memory (resident set size) [alpha]
# TYPE otelcol_process_memory_rss gauge
otelcol_process_memory_rss{service_instance_id="0f075fca-13ae-4191-9f97-590ae6129e57",service_name="otelcontribcol",service_version="0.123.0-dev"} 1.7170432e+08
# HELP otelcol_process_runtime_heap_alloc_bytes Bytes of allocated heap objects (see 'go doc runtime.MemStats.HeapAlloc') [alpha]
# TYPE otelcol_process_runtime_heap_alloc_bytes gauge
otelcol_process_runtime_heap_alloc_bytes{service_instance_id="0f075fca-13ae-4191-9f97-590ae6129e57",service_name="otelcontribcol",service_version="0.123.0-dev"} 2.2451488e+07
# HELP otelcol_process_runtime_total_alloc_bytes Cumulative bytes allocated for heap objects (see 'go doc runtime.MemStats.TotalAlloc') [alpha]
# TYPE otelcol_process_runtime_total_alloc_bytes counter
otelcol_process_runtime_total_alloc_bytes{service_instance_id="0f075fca-13ae-4191-9f97-590ae6129e57",service_name="otelcontribcol",service_version="0.123.0-dev"} 3.2760136e+07
# HELP otelcol_process_runtime_total_sys_memory_bytes Total bytes of memory obtained from the OS (see 'go doc runtime.MemStats.Sys') [alpha]
# TYPE otelcol_process_runtime_total_sys_memory_bytes gauge
otelcol_process_runtime_total_sys_memory_bytes{service_instance_id="0f075fca-13ae-4191-9f97-590ae6129e57",service_name="otelcontribcol",service_version="0.123.0-dev"} 4.6224648e+07
# HELP otelcol_process_uptime Uptime of the process [alpha]
# TYPE otelcol_process_uptime counter
otelcol_process_uptime{service_instance_id="0f075fca-13ae-4191-9f97-590ae6129e57",service_name="otelcontribcol",service_version="0.123.0-dev"} 2.817375177
# HELP promhttp_metric_handler_errors_total Total number of internal errors encountered by the promhttp metric handler.
# TYPE promhttp_metric_handler_errors_total counter
promhttp_metric_handler_errors_total{cause="encoding"} 0
promhttp_metric_handler_errors_total{cause="gathering"} 0
# HELP target_info Target metadata
# TYPE target_info gauge
target_info{service_instance_id="0f075fca-13ae-4191-9f97-590ae6129e57",service_name="otelcontribcol",service_version="0.123.0-dev"} 1

after:

# HELP otelcol_process_cpu_seconds Total CPU user and system time in seconds [alpha]
# TYPE otelcol_process_cpu_seconds counter
otelcol_process_cpu_seconds{service_instance_id="26c2a224-1b22-4b2f-b0e8-5a051ac7a895",service_name="otelcontribcol",service_version="0.123.0-dev"} 0.19
# HELP otelcol_process_memory_rss Total physical memory (resident set size) [alpha]
# TYPE otelcol_process_memory_rss gauge
otelcol_process_memory_rss{service_instance_id="26c2a224-1b22-4b2f-b0e8-5a051ac7a895",service_name="otelcontribcol",service_version="0.123.0-dev"} 2.36716032e+08
# HELP otelcol_process_runtime_heap_alloc_bytes Bytes of allocated heap objects (see 'go doc runtime.MemStats.HeapAlloc') [alpha]
# TYPE otelcol_process_runtime_heap_alloc_bytes gauge
otelcol_process_runtime_heap_alloc_bytes{service_instance_id="26c2a224-1b22-4b2f-b0e8-5a051ac7a895",service_name="otelcontribcol",service_version="0.123.0-dev"} 8.2799552e+07
# HELP otelcol_process_runtime_total_alloc_bytes Cumulative bytes allocated for heap objects (see 'go doc runtime.MemStats.TotalAlloc') [alpha]
# TYPE otelcol_process_runtime_total_alloc_bytes counter
otelcol_process_runtime_total_alloc_bytes{service_instance_id="26c2a224-1b22-4b2f-b0e8-5a051ac7a895",service_name="otelcontribcol",service_version="0.123.0-dev"} 9.9257384e+07
# HELP otelcol_process_runtime_total_sys_memory_bytes Total bytes of memory obtained from the OS (see 'go doc runtime.MemStats.Sys') [alpha]
# TYPE otelcol_process_runtime_total_sys_memory_bytes gauge
otelcol_process_runtime_total_sys_memory_bytes{service_instance_id="26c2a224-1b22-4b2f-b0e8-5a051ac7a895",service_name="otelcontribcol",service_version="0.123.0-dev"} 1.05537816e+08
# HELP otelcol_process_uptime Uptime of the process [alpha]
# TYPE otelcol_process_uptime counter
otelcol_process_uptime{service_instance_id="26c2a224-1b22-4b2f-b0e8-5a051ac7a895",service_name="otelcontribcol",service_version="0.123.0-dev"} 10.815659494
# HELP promhttp_metric_handler_errors_total Total number of internal errors encountered by the promhttp metric handler.
# TYPE promhttp_metric_handler_errors_total counter
promhttp_metric_handler_errors_total{cause="encoding"} 0
promhttp_metric_handler_errors_total{cause="gathering"} 0
# HELP target_info Target metadata
# TYPE target_info gauge
target_info{service_instance_id="26c2a224-1b22-4b2f-b0e8-5a051ac7a895",service_name="otelcontribcol",service_version="0.123.0-dev"} 1

otelcol_process_memory_rss increased from 164MiB to 226MiB.

To confirm it is related to the number of pipelines, I added 2 more pipelines with different es exporter configs:

    traces/nop:
      exporters:
        - elasticsearch/two
      processors:
      receivers:
        - nop
    traces/noptwo:
      exporters:
        - elasticsearch/three
      processors:
      receivers:
        - nop

This time the RSS increases to 269 MiB, which pretty much matches the math above. This is a significant resource increase and needs to be resolved before moving forward with this PR.

Member Author:

I have changed the implementation to only create the LRUs when they are needed.

return &Serializer{}, nil
}
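The lazy-initialization fix described in the thread can be sketched as follows. This is a simplified stand-in for the exporter's actual code (the lruSet type, sizes, and field names are placeholders): a sync.Once defers allocation until the first profile is serialized, so logs/metrics/traces pipelines never pay the cost, and a load failure is replayed on every subsequent call:

```go
package main

import (
	"fmt"
	"sync"
)

// lruSet is a placeholder for the exporter's internal lru.LRUSet.
type lruSet struct{ size uint32 }

func newLRUSet(size uint32) (*lruSet, error) {
	return &lruSet{size: size}, nil
}

// serializer mirrors the pattern from the PR: LRUs are created lazily,
// once, the first time profile serialization needs them.
type serializer struct {
	loadLRUsOnce sync.Once
	loadLRUsErr  error
	knownTraces  *lruSet
}

func (s *serializer) loadLRUs() error {
	s.loadLRUsOnce.Do(func() {
		s.knownTraces, s.loadLRUsErr = newLRUSet(128 * 1024)
	})
	// If loading failed the first time, later calls report the same error,
	// so subsequent serializations also fail instead of using a nil cache.
	return s.loadLRUsErr
}

func main() {
	s := &serializer{}
	if err := s.loadLRUs(); err != nil {
		panic(err)
	}
	fmt.Println(s.knownTraces != nil) // true
}
```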