Commit 30fde55

Milvus-doc-bot authored and committed
Release new docs to master
1 parent 9139f97

File tree

3 files changed: +71 −76 lines
Binary file not shown (−893 KB)
Binary file not shown (−976 KB)

v2.6.x/site/en/adminGuide/monitor/configure_grafana_loki.md

Lines changed: 71 additions & 76 deletions

@@ -15,30 +15,7 @@ In this guide, you will learn how to:
 - Query logs using Grafana.
 
 For reference, [Promtail](https://grafana.com/docs/loki/latest/send-data/promtail/#promtail-agent) will be deprecated.
-So we instead introduce Alloy, which has been officially suggested by Grafana Labs as the new agent.
-
-# Introduction
-
-Before diving into how to build a logging system with Milvus, we’d like to first introduce the mechanisms of the logging system being used.
-Broadly speaking, there are two main structures you can apply.
-Please note that the mechanism to be introduced can be applied regardless of whether the [log functionality](https://milvus.io/docs/configure_log.md) in Milvus is enabled.
-
-## 1. Using host volumes of kubernetes worker node
-
-kubernetes worker nodes periodically write stream logs generated from pods scheduled on those nodes to a specific path in the node’s file system as files with a `.log` extension, we will leverage this feature.
-Next, we will deploy Alloy, which acts as an agent, as a DaemonSet on the worker nodes.
-This Alloy will share the path where the log files are stored on the worker nodes via a host volume.
-As a result, the log files from the Milvus pods will be visible inside the Alloy pod, and Alloy will read these files and send them to Loki.
-
-![Logging with k8s worker node host volume](../../../../assets/monitoring/logging_HostVolume.png "logging with host volume sharing.")
-
-## 2. Using kubernetes API server
-
-kubernetes API server is one of the control plane components. Alloy doesn't necessarily need to be deployed as a DaemonSet. It works well as a Deployment.
-Instead, Alloy must request to kubernetes API server for fetching stream logs of milvus pods and get them.
-Finally, Alloy will send the stream logs to Loki.
-
-![Logging with k8s API Server](../../../../assets/monitoring/logging_K8sApi.png "logging with k8s api server.")
+So we introduce Alloy, which has been officially suggested by Grafana Labs as the new agent to collect Kubernetes logs and forward them to Loki.
 
 ## Prerequisites
 
@@ -107,77 +84,95 @@ helm install --values loki.yaml loki grafana/loki -n loki
 
 ## Deploy Alloy
 
-You can configure alloy and deploy alloy based on Helm chart. Refer to the official Alloy [documentation](https://grafana.com/docs/alloy/latest/set-up/install/) for more installation options.
-We will show you Alloy [configuration](https://grafana.com/docs/alloy/latest/configure/).
 
-### Create Alloy Configuration
-#### 1. Using host volumes of kubernetes worker node
-`alloy.yaml`:
+We will show you Alloy [Configuration](https://grafana.com/docs/alloy/latest/configure/).
+
+### 1. Create Alloy Configuration
+
+We will use the following `alloy.yaml` to collect logs of all Kubernetes pods & send them to Loki via loki-gateway:
+
 ```yaml
 alloy:
   enableReporting: false
   resources: {}
   configMap:
     create: true
     content: |-
-      loki.write "remote_loki" {
+      loki.write "default" {
         endpoint {
-          url = "http://loki-gateway/loki/api/v1/push"
+          url = "http://loki-gateway/loki/api/v1/push"
         }
       }
-
-      loki.source.file "milvus_logs" {
-        targets    = local.file_match.milvus_log_files.targets
-        forward_to = [loki.write.remote_loki.receiver]
+
+      discovery.kubernetes "pod" {
+        role = "pod"
       }
-
-      local.file_match "milvus_log_files" {
-        path_targets = [
-          {"__path__" = "/your/worker/node/var/log/pods/milvus_milvus-*/**/*.log"},
-        ]
+
+      loki.source.kubernetes "pod_logs" {
+        targets    = discovery.relabel.pod_logs.output
+        forward_to = [loki.write.default.receiver]
       }
-  # mount to pods with host volume
-  mounts:
-    extra:
-      - name: log-pods
-        mountPath: /host/var/log/pods
-        readOnly: true
-  controller:
-    type: 'daemonset'
-  # make volume that use host volume in worker node
-  volumes:
-    extra:
-      - name: log-pods
-        hostPath:
-          path: /var/log/pods
-```
 
-#### 2. Using kubernetes API server
-`alloy.yaml`:
-```yaml
-alloy:
-  enableReporting: false
-  resources: {}
-  configMap:
-    create: true
-    content: |-
-      loki.write "remote_loki" {
-        endpoint {
-          url = "http://loki-gateway/loki/api/v1/push"
+      // Rewrite the label set to make log query easier
+      discovery.relabel "pod_logs" {
+        targets = discovery.kubernetes.pod.targets
+        rule {
+          source_labels = ["__meta_kubernetes_namespace"]
+          action = "replace"
+          target_label = "namespace"
         }
-      }
 
-      discovery.kubernetes "milvus_pod" {
-        role = "pod"
-      }
+        // "pod" <- "__meta_kubernetes_pod_name"
+        rule {
+          source_labels = ["__meta_kubernetes_pod_name"]
+          action = "replace"
+          target_label = "pod"
+        }
 
-      loki.source.kubernetes "milvus_pod_logs" {
-        targets    = discovery.kubernetes.milvus_pod.output
-        forward_to = [loki.write.remote_loki.receiver]
+        // "container" <- "__meta_kubernetes_pod_container_name"
+        rule {
+          source_labels = ["__meta_kubernetes_pod_container_name"]
+          action = "replace"
+          target_label = "container"
+        }
+
+        // "app" <- "__meta_kubernetes_pod_label_app_kubernetes_io_name"
+        rule {
+          source_labels = ["__meta_kubernetes_pod_label_app_kubernetes_io_name"]
+          action = "replace"
+          target_label = "app"
+        }
+
+        // "job" <- "__meta_kubernetes_namespace", "__meta_kubernetes_pod_container_name"
+        rule {
+          source_labels = ["__meta_kubernetes_namespace", "__meta_kubernetes_pod_container_name"]
+          action = "replace"
+          target_label = "job"
+          separator = "/"
+          replacement = "$1"
+        }
+
+        // "__path__" <- "__meta_kubernetes_pod_uid", "__meta_kubernetes_pod_container_name"
+        rule {
+          source_labels = ["__meta_kubernetes_pod_uid", "__meta_kubernetes_pod_container_name"]
+          action = "replace"
+          target_label = "__path__"
+          separator = "/"
+          replacement = "/var/log/pods/*$1/*.log"
+        }
+
+        // "container_runtime" <- "__meta_kubernetes_pod_container_id"
+        rule {
+          source_labels = ["__meta_kubernetes_pod_container_id"]
+          action = "replace"
+          target_label = "container_runtime"
+          regex = "^(\\S+):\\/\\/.+$"
+          replacement = "$1"
+        }
       }
 ```
 
-### Install Alloy
+### 2. Install Alloy
 
 ```shell
 helm install --values alloy.yaml alloy grafana/alloy -n loki
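The `discovery.relabel` rules added in `alloy.yaml` above can be sanity-checked without a cluster. The following Python sketch mimics how each `replace` rule joins its `source_labels` with the `separator` (the default regex `(.*)` captures the joined value as `$1`) and writes the result into `target_label`; the sample pod metadata values are hypothetical, not taken from a real deployment:

```python
import re

# Hypothetical metadata labels that Kubernetes service discovery would
# attach to a Milvus pod target (values are illustrative only).
sample = {
    "__meta_kubernetes_namespace": "milvus",
    "__meta_kubernetes_pod_name": "milvus-standalone-0",
    "__meta_kubernetes_pod_container_name": "standalone",
    "__meta_kubernetes_pod_label_app_kubernetes_io_name": "milvus",
    "__meta_kubernetes_pod_uid": "1b2c3d4e",
    "__meta_kubernetes_pod_container_id": "containerd://0a1b2c",
}

def relabel(meta):
    """Mimic the discovery.relabel rules from alloy.yaml."""
    out = {
        "namespace": meta["__meta_kubernetes_namespace"],
        "pod": meta["__meta_kubernetes_pod_name"],
        "container": meta["__meta_kubernetes_pod_container_name"],
        "app": meta["__meta_kubernetes_pod_label_app_kubernetes_io_name"],
    }
    # "job": namespace and container joined by "/", kept whole by "$1"
    out["job"] = "/".join([out["namespace"], out["container"]])
    # "__path__": "<uid>/<container>" spliced into the kubelet log path glob
    joined = "/".join([meta["__meta_kubernetes_pod_uid"], out["container"]])
    out["__path__"] = f"/var/log/pods/*{joined}/*.log"
    # "container_runtime": the scheme part of the container ID
    m = re.match(r"^(\S+)://.+$", meta["__meta_kubernetes_pod_container_id"])
    if m:
        out["container_runtime"] = m.group(1)
    return out

labels = relabel(sample)
print(labels["job"])                # milvus/standalone
print(labels["__path__"])           # /var/log/pods/*1b2c3d4e/standalone/*.log
print(labels["container_runtime"])  # containerd
```

This is why, after deployment, logs can be filtered by `namespace`, `pod`, `container`, and `app` selectors instead of raw `__meta_kubernetes_*` labels.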
@@ -228,4 +223,4 @@ After adding Loki as a data source, query Milvus logs in Grafana:
 2. In the upper-left corner of the page, choose the loki data source.
 3. Use __Label browser__ to select labels and query logs.
 
-![Query](../../../../assets/milvuslog.jpg "Query Milvus logs in Grafana.")
+![Query](../../../../assets/milvuslog.jpg "Query Milvus logs in Grafana.")
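Besides the Label browser, logs can also be fetched straight from Loki's HTTP API. A minimal sketch, assuming the `loki-gateway` service from the configuration above is reachable and using label names produced by the relabel rules; the request URL is only constructed here, not sent:

```python
from urllib.parse import urlencode

def build_loki_query(base_url: str, selector: str, limit: int = 100) -> str:
    """Build the query_range URL Grafana issues when a LogQL selector runs."""
    params = {"query": selector, "limit": limit, "direction": "backward"}
    return f"{base_url}/loki/api/v1/query_range?{urlencode(params)}"

# Select Milvus logs by the labels the relabel rules attached.
url = build_loki_query("http://loki-gateway", '{namespace="milvus", app="milvus"}')
print(url)
```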

0 commit comments