container-deployment/kubernetes.md (1 addition, 1 deletion)
@@ -35,7 +35,7 @@ Since applications runs in Pods and multiple Pods might exists across multiple n
For [Kubernetes](https://kubernetes.io), a [DaemonSet](https://kubernetes.io/docs/admin/daemons/) ensures that all \(or some\) nodes run a copy of a _pod_. To solve log collection, we are going to implement a Fluentd DaemonSet.
- Fluentd is flexible enough and have the proper plugins to distribute logs to different third-party applications like databases or cloud services, so the principal question is to know: _Where the logs will be stored?_. Once we got that question answered, we can move forward configuring our DaemonSet.
+ Fluentd is flexible enough and has the proper plugins to distribute logs to different third-party applications like databases or cloud services, so the principal question is: _Where will the logs be stored?_ Once that question is answered, we can move on to configuring our DaemonSet.
The following steps will focus on sending the logs to an Elasticsearch Pod:
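As a rough illustration of where those steps end up, here is a minimal sketch of a Fluentd DaemonSet manifest. The image tag, namespace, and Elasticsearch host value are illustrative assumptions, not values prescribed by this page:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-system          # assumed namespace
spec:
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      containers:
        - name: fluentd
          # Illustrative image; the fluentd-kubernetes-daemonset project ships
          # variants preconfigured for different outputs (here: Elasticsearch).
          image: fluent/fluentd-kubernetes-daemonset:v1-debian-elasticsearch
          env:
            - name: FLUENT_ELASTICSEARCH_HOST
              value: "elasticsearch.kube-system.svc"  # assumed Elasticsearch service
            - name: FLUENT_ELASTICSEARCH_PORT
              value: "9200"
          volumeMounts:
            - name: varlog
              mountPath: /var/log
      volumes:
        - name: varlog
          hostPath:
            path: /var/log        # node logs collected by the DaemonSet pods
```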
deployment/high-availability.md (6 additions, 6 deletions)
@@ -26,9 +26,9 @@ In such systems, several delivery guarantees are possible:
is the most desirable.
- If the system "**can't lose a single event**", and must also transfer "**exactly once**", then the system must stop ingesting events when it runs out of write capacity. The proper approach would be to use synchronous logging and return errors when the event cannot be accepted.
+ If the system "**can't lose a single event**" and must also transfer "**exactly once**", then the system must stop ingesting events when it runs out of write capacity. The proper approach would be to use synchronous logging and return errors when the event cannot be accepted.
- That's why Fluentd provides '**at most once**' and '**at least once**' transfers. To collect massive amounts of data without impacting application performance, a data logger must transfer data asynchronously. This improves performance at the cost of potential delivery failures.
+ That's why Fluentd provides "**at most once**" and "**at least once**" transfers. To collect massive amounts of data without impacting application performance, a data logger must transfer data asynchronously. This improves performance at the cost of potential delivery failures.
However, most failure scenarios are preventable. The following sections describe how to set up Fluentd's topology for high-availability.
@@ -38,9 +38,9 @@ To configure Fluentd for high-availability, we assume that your network consists
- '**log forwarders**' are typically installed on every node to receive local events. Once an event is received, they forward it to the 'log aggregators' through the network. For log forwarders, [fluent-bit](https://fluentbit.io/) is also good candidate for light-weight processing.
+ '**Log forwarders**' are typically installed on every node to receive local events. Once an event is received, they forward it to the 'log aggregators' through the network. For log forwarders, [fluent-bit](https://fluentbit.io/) is also a good candidate for lightweight processing.
- '**log aggregators**' are daemons that continuously receive events from the log forwarders. They buffer the events and periodically upload the data into the cloud.
+ '**Log aggregators**' are daemons that continuously receive events from the log forwarders. They buffer the events and periodically upload the data into the cloud.
Fluentd can act as either a log forwarder or a log aggregator, depending on its configuration. The next sections describe the respective setups. We assume that the active log aggregator has an IP **192.168.0.1** and the backup has IP **192.168.0.2**.
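As a sketch of this topology (the tag, port, and buffer path are illustrative), a forwarder's configuration might point at the active aggregator and mark the backup with `standby`:

```
<match mypattern.**>
  @type forward

  # active log aggregator
  <server>
    host 192.168.0.1
    port 24224
  </server>

  # backup log aggregator, used only when the active one is unreachable
  <server>
    host 192.168.0.2
    port 24224
    standby
  </server>

  # buffer events on disk so a forwarder restart does not lose them
  <buffer>
    @type file
    path /var/log/fluent/forward-buffer
    flush_interval 60s
  </buffer>
</match>
```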
@@ -112,7 +112,7 @@ The incoming logs are buffered, then periodically uploaded to the cloud. If the
When a log forwarder receives events from applications, the events are first written into a disk buffer \(specified by `<buffer>`'s `path`\). After every `flush_interval`, the buffered data is forwarded to aggregators.
- This process is inherently robust against data loss. If a log forwarder's fluentd process dies then on its restart the buffered data is properly transferred to its aggregator. If the network between forwarders and aggregators breaks, the data transfer is automatically retried.
+ This process is inherently robust against data loss. If a log forwarder's fluentd process dies, then on its restart, the buffered data is properly transferred to its aggregator. If the network between forwarders and aggregators breaks, the data transfer is automatically retried.
However, possible message loss scenarios do exist:
@@ -126,7 +126,7 @@ However, possible message loss scenarios do exist:
When log aggregators receive events from log forwarders, the events are first written into a disk buffer \(specified by `<buffer>`'s `path`\). After every `flush_interval`, the buffered data is uploaded to the cloud.
- This process is inherently robust against data loss. If a log aggregator's fluentd process dies then on its restart the data from the log forwarder is properly retransferred. If the network between aggregators and the cloud breaks, the data transfer is automatically retried.
+ This process is inherently robust against data loss. If a log aggregator's fluentd process dies, then on its restart, the data from the log forwarder is properly retransferred. If the network between aggregators and the cloud breaks, the data transfer is automatically retried.
However, possible message loss scenarios do exist:
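On the aggregator side, a corresponding sketch (the listening port is Fluentd's default; the output plugin is a placeholder for whatever cloud destination you chose):

```
# receive events from the log forwarders
<source>
  @type forward
  port 24224
</source>

<match mypattern.**>
  # placeholder: substitute the output plugin for your storage destination
  @type YOUR_OUTPUT_PLUGIN

  # buffer on disk and upload periodically
  <buffer>
    @type file
    path /var/log/fluent/aggregator-buffer
    flush_interval 60s
  </buffer>
</match>
```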
deployment/multi-process-workers.md (1 addition, 1 deletion)
@@ -8,7 +8,7 @@ This feature can simply replace `fluent-plugin-multiprocess`.
By default, one instance of `fluentd` launches a supervisor and a worker. A worker consists of input/filter/output plugins.
- Multi-process workers feature launches multiple workers and use a separate process per worker. `fluentd` provides several features for multi-process workers, so you can get multi-process merits.
+ The **multi-process workers** feature launches multiple workers and uses a separate process per worker. `fluentd` provides several features for multi-process workers.
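As a minimal sketch (the worker count is illustrative), the feature is enabled with the `workers` parameter in the `<system>` directive:

```
<system>
  workers 4
</system>
```

With this in place, `<worker N>` directives can then pin specific plugins to a single worker process.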
deployment/performance-tuning-single-process.md (2 additions, 5 deletions)
@@ -71,11 +71,8 @@ Ruby has several GC parameters to tune GC performance and you can configure thes
Here's a quote from the documentation:
- ```text
- Do full GC when the number of old objects is more than R * N
- where R is this factor and
- N is the number of old objects just after last full GC.
- ```
+ > Do full GC when the number of old objects is more than R * N
+ > where R is this factor and N is the number of old objects just after last full GC.
So, the default GC behavior does not call full GC until the number of old objects reaches `2.0 * before old objects`. This improves the throughput but grows the total memory usage. This setting is not good for low-resource environments, e.g. a small container. For such cases, try `RUBY_GC_HEAP_OLDOBJECT_LIMIT_FACTOR=0.9` or `RUBY_GC_HEAP_OLDOBJECT_LIMIT_FACTOR=1.2`.
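For example, a hedged way to apply this when launching `fluentd` directly (the value and config path are illustrative):

```
$ RUBY_GC_HEAP_OLDOBJECT_LIMIT_FACTOR=0.9 fluentd -c fluentd.conf
```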
deployment/rpc.md (1 addition, 1 deletion)
@@ -2,7 +2,7 @@
HTTP RPC enables you to manage your Fluentd instance through HTTP endpoints. You can use this feature as a replacement for [Unix signals](signals.md).
- It is especially useful for environments where signals are not supported well e.g. Windows. This requires Fluentd to start not with --no-supervisor command-line option.
+ It is especially useful for environments where signals are not well supported, e.g. Windows. Note that this requires Fluentd to be started without the `--no-supervisor` command-line option.
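As a minimal sketch, the RPC endpoint is enabled via the `<system>` directive (the address below is illustrative):

```
<system>
  rpc_endpoint 127.0.0.1:24444
</system>
```

After that, the endpoints can be hit over HTTP — for example, flushing all buffers:

```
$ curl http://127.0.0.1:24444/api/plugins.flushBuffers
```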
deployment/trouble-shooting.md (1 addition, 1 deletion)
@@ -87,7 +87,7 @@ You sometimes hit unexpected shutdown with non-zero exit status like this:
2016-01-01 00:00:02 +0800 [info]: process finished code=6
```
- If the problem happens inside Ruby e.g. segmentation fault, C extension bug, etc., you cannot get the complete log when `fluentd` process is daemonized. For example, `td-agent` launches fluentd with `--daemon` option. In `td-agent` case, you can get the complete log with following command to simulate `/etc/init.d/td-agent start` without daemonize:
+ If the problem happens inside Ruby, e.g. a segmentation fault, a C extension bug, etc., you cannot get the complete log when the `fluentd` process is daemonized. For example, `td-agent` launches fluentd with the `--daemon` option. In the `td-agent` case, you can get the complete log with the following command, which simulates `/etc/init.d/td-agent start` without daemonizing \(i.e. running in the foreground\):
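The guide's exact `td-agent` command falls outside this hunk. As a hedged, generic illustration of the same idea, plain `fluentd` can be run in the foreground so crash output reaches the terminal (the config path is an assumption):

```
$ fluentd -c /etc/fluent/fluentd.conf -v
```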
how-to-guides/free-alternative-to-splunk-by-fluentd.md (2 additions, 2 deletions)
@@ -6,7 +6,7 @@
[Elasticsearch](https://www.elastic.co/products/elasticsearch) is an open-source search engine well-known for its ease of use. [Kibana](https://www.elastic.co/products/kibana) is an open-source Web UI that makes Elasticsearch user friendly for marketers, engineers and data scientists alike.
- By combining these three tools \(Fluentd + Elasticsearch + Kibana\) we get a scalable, flexible, easy to use the log search engine with a great Web UI that provides an open-source Splunk alternative, all for free.
+ By combining these three tools \(Fluentd + Elasticsearch + Kibana\), we get a scalable, flexible, easy-to-use log search engine with a great Web UI that provides an open-source Splunk alternative, all for free.
@@ -45,7 +45,7 @@ Note: You can also install Elasticsearch \(and Kibana\) using RPM/DEB packages.
## Set Up Kibana
- To install Kibana, download it from the official website and extract it. Kibana is an HTML/CSS/JavaScript application \([download](https://www.elastic.co/downloads/kibana)\). Use the binary for 64-bit Linux systems. For this article, we download the binary for macOS.
+ To install Kibana, download it from the official website and extract it. Kibana is an HTML/CSS/JavaScript application \([download](https://www.elastic.co/downloads/kibana)\). Use the binary for 64-bit Linux systems.
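A hedged sketch of the download-and-run steps on Linux; the version number is illustrative, so check the download page above for the current release:

```
$ curl -O https://artifacts.elastic.co/downloads/kibana/kibana-8.5.0-linux-x86_64.tar.gz
$ tar -xzf kibana-8.5.0-linux-x86_64.tar.gz
$ cd kibana-8.5.0-linux-x86_64
$ ./bin/kibana
```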
how-to-guides/syslog-influxdb.md (1 addition, 1 deletion)
@@ -55,7 +55,7 @@ Once it is installed, you can run it with:
$ sudo systemctl start chronograf
```
- Then, go to localhost:8888 \(or wherever you are hosting Chronograf\) to access Chronograf's web console which is the successor of InfluxDB's web console.
+ Then, go to localhost:8888 \(or wherever you are hosting Chronograf\) to access Chronograf's web console, which is the successor to InfluxDB's web console.
Create a database called `test`. This is where we will be storing `syslog` data:
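One way to do this (a sketch using the `influx` CLI for InfluxDB 1.x; Chronograf's UI can also create databases):

```
$ influx -execute 'CREATE DATABASE test'
```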
installation/before-install.md (3 additions, 3 deletions)
@@ -12,7 +12,7 @@ Follow these recommendations:
It is highly recommended that you set up an NTP daemon \(e.g. [`chrony`](https://chrony.tuxfamily.org/), `ntpd`, etc.\) on the node to have an accurate current timestamp. This is crucial for all the production-grade logging services.
- For Amazon Web Services users, we recommend using [AWS-hosted NTP server](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html).
+ For Amazon Web Services users, we recommend using the [AWS-hosted NTP server](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html).
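For example, with `chrony` on EC2 this amounts to one line in `chrony.conf` pointing at the Amazon Time Sync Service address described in the AWS document linked above:

```
server 169.254.169.123 prefer iburst
```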
## Increase the Maximum Number of File Descriptors
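The gist of this section, as a hedged sketch (the limit value is a common choice, not a requirement): check the current limit with `ulimit -n` and raise it for the user running `fluentd`, e.g. in `/etc/security/limits.conf`:

```
$ ulimit -n
1024

# /etc/security/limits.conf (illustrative entries)
root soft nofile 65536
root hard nofile 65536
```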
@@ -68,9 +68,9 @@ fs.protected_hardlinks = 1
fs.protected_symlinks = 1
```
- This settings are almost enough for time-of-check to time-of-use \(TOCTOU, TOCTTOU or TOC/TOU\) which is a class of software bugs.
+ These settings are usually enough to protect against time-of-check to time-of-use (TOCTOU, TOCTTOU or TOC/TOU), a class of software bugs.
- If you turned off these protection, please turn on them.
+ If you have turned off these protections, please turn them back on.
Use the `sysctl -p` command or reboot your node for the changes to take effect.