
Commit a1e1a1f

Correct some spelling and grammar issues
Improve documentation readability.

Signed-off-by: Brad Solomon <81818815+brsolomon-deloitte@users.noreply.github.com>
1 parent 9119dfa commit a1e1a1f

14 files changed: +32 / -35 lines

container-deployment/kubernetes.md

Lines changed: 1 addition & 1 deletion
@@ -35,7 +35,7 @@ Since applications runs in Pods and multiple Pods might exists across multiple n
For [Kubernetes](https://kubernetes.io), a [DaemonSet](https://kubernetes.io/docs/admin/daemons/) ensures that all \(or some\) nodes run a copy of a _pod_. To solve log collection, we are going to implement a Fluentd DaemonSet.

- Fluentd is flexible enough and have the proper plugins to distribute logs to different third-party applications like databases or cloud services, so the principal question is to know: _Where the logs will be stored?_. Once we got that question answered, we can move forward configuring our DaemonSet.
+ Fluentd is flexible enough and has the proper plugins to distribute logs to different third-party applications like databases or cloud services, so the principal question is _Where will the logs be stored?_. Once we got that question answered, we can move forward configuring our DaemonSet.

The following steps will focus on sending the logs to an Elasticsearch Pod:
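
For reference, deploying such a DaemonSet typically comes down to applying a manifest and checking that a Pod lands on each node; a minimal sketch, assuming a manifest targeting Elasticsearch is already prepared (the file name, DaemonSet name, and namespace below are illustrative, not part of this commit):

```text
# apply a Fluentd DaemonSet manifest (file name is an example)
$ kubectl apply -f fluentd-daemonset-elasticsearch.yaml

# confirm one Fluentd Pod per node
$ kubectl get daemonset fluentd --namespace kube-system
```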

deployment/high-availability.md

Lines changed: 6 additions & 6 deletions
@@ -26,9 +26,9 @@ In such systems, several delivery guarantees are possible:
is the most desirable.

- If the system "**can't lose a single event**", and must also transfer "**exactly once**", then the system must stop ingesting events when it runs out of write capacity. The proper approach would be to use synchronous logging and return errors when the event cannot be accepted.
+ If the system "**can't lose a single event**" and must also transfer "**exactly once**", then the system must stop ingesting events when it runs out of write capacity. The proper approach would be to use synchronous logging and return errors when the event cannot be accepted.

- That's why Fluentd provides '**at most once**' and '**at least once**' transfers. To collect massive amounts of data without impacting application performance, a data logger must transfer data asynchronously. This improves performance at the cost of potential delivery failures.
+ That's why Fluentd provides "**at most once**" and "**at least once**" transfers. To collect massive amounts of data without impacting application performance, a data logger must transfer data asynchronously. This improves performance at the cost of potential delivery failures.

However, most failure scenarios are preventable. The following sections describe how to set up Fluentd's topology for high-availability.

@@ -38,9 +38,9 @@ To configure Fluentd for high-availability, we assume that your network consists
![Fluentd&apos;s High-Availability Overview](../.gitbook/assets/fluentd_ha%20%281%29%20%281%29%20%281%29.png)

- '**log forwarders**' are typically installed on every node to receive local events. Once an event is received, they forward it to the 'log aggregators' through the network. For log forwarders, [fluent-bit](https://fluentbit.io/) is also good candidate for light-weight processing.
+ '**Log forwarders**' are typically installed on every node to receive local events. Once an event is received, they forward it to the 'log aggregators' through the network. For log forwarders, [fluent-bit](https://fluentbit.io/) is also good candidate for light-weight processing.

- '**log aggregators**' are daemons that continuously receive events from the log forwarders. They buffer the events and periodically upload the data into the cloud.
+ '**Log aggregators**' are daemons that continuously receive events from the log forwarders. They buffer the events and periodically upload the data into the cloud.

Fluentd can act as either a log forwarder or a log aggregator, depending on its configuration. The next sections describe the respective setups. We assume that the active log aggregator has an IP **192.168.0.1** and the backup has IP **192.168.0.2**.

@@ -112,7 +112,7 @@ The incoming logs are buffered, then periodically uploaded to the cloud. If the
When a log forwarder receives events from applications, the events are first written into a disk buffer \(specified by `<buffer>`'s `path`\). After every `flush_interval`, the buffered data is forwarded to aggregators.

- This process is inherently robust against data loss. If a log forwarder's fluentd process dies then on its restart the buffered data is properly transferred to its aggregator. If the network between forwarders and aggregators breaks, the data transfer is automatically retried.
+ This process is inherently robust against data loss. If a log forwarder's fluentd process dies, then on its restart, the buffered data is properly transferred to its aggregator. If the network between forwarders and aggregators breaks, the data transfer is automatically retried.

However, possible message loss scenarios do exist:

@@ -126,7 +126,7 @@ However, possible message loss scenarios do exist:
When log aggregators receive events from log forwarders, the events are first written into a disk buffer \(specified by `<buffer>`'s `path`\). After every `flush_interval`, the buffered data is uploaded to the cloud.

- This process is inherently robust against data loss. If a log aggregator's fluentd process dies then on its restart the data from the log forwarder is properly retransferred. If the network between aggregators and the cloud breaks, the data transfer is automatically retried.
+ This process is inherently robust against data loss. If a log aggregator's fluentd process dies, then on its restart, the data from the log forwarder is properly retransferred. If the network between aggregators and the cloud breaks, the data transfer is automatically retried.

However, possible message loss scenarios do exist:
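
For reference, a minimal log forwarder sketch that matches the mechanism described in these hunks, with the active aggregator at 192.168.0.1 and the backup at 192.168.0.2 as assumed in the article (the match pattern and buffer path are illustrative, not part of this commit):

```text
<match mytag.**>
  @type forward

  # active log aggregator
  <server>
    host 192.168.0.1
    port 24224
  </server>

  # backup log aggregator, used when the active one is unreachable
  <server>
    host 192.168.0.2
    port 24224
    standby
  </server>

  # events are first written to this on-disk buffer,
  # then forwarded after every flush_interval
  <buffer>
    @type file
    path /var/log/fluent/forward-buffer
    flush_interval 60s
  </buffer>
</match>
```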

deployment/multi-process-workers.md

Lines changed: 1 addition & 1 deletion
@@ -8,7 +8,7 @@ This feature can simply replace `fluent-plugin-multiprocess`.
By default, one instance of `fluentd` launches a supervisor and a worker. A worker consists of input/filter/output plugins.

- Multi-process workers feature launches multiple workers and use a separate process per worker. `fluentd` provides several features for multi-process workers, so you can get multi-process merits.
+ The **multi-process workers** feature launches multiple workers and use a separate process per worker. `fluentd` provides several features for multi-process workers.

![Multi-process Workers](../.gitbook/assets/multi-process-workers%20%281%29%20%281%29.png)
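
For reference, a minimal sketch of enabling the feature touched in this hunk (the worker count and the pinned source are illustrative, not part of this commit):

```text
<system>
  # launch four worker processes under one supervisor
  workers 4
</system>

# optionally pin a plugin to a specific worker
<worker 0>
  <source>
    @type forward
  </source>
</worker>
```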

deployment/performance-tuning-single-process.md

Lines changed: 2 additions & 5 deletions
@@ -71,11 +71,8 @@ Ruby has several GC parameters to tune GC performance and you can configure thes
Here's a quote from the documentation:

- ```text
- Do full GC when the number of old objects is more than R * N
- where R is this factor and
- N is the number of old objects just after last full GC.
- ```
+ > Do full GC when the number of old objects is more than R * N
+ > where R is this factor and N is the number of old objects just after last full GC.

So, the default GC behavior does not call full GC until the number of old objects reaches `2.0 * before old objects`. This improves the throughput but it grows the total memory usage. This setting is not good for the low resource environment e.g. a small container. For such cases, try `RUBY_GC_HEAP_OLDOBJECT_LIMIT_FACTOR=0.9` or `RUBY_GC_HEAP_OLDOBJECT_LIMIT_FACTOR=1.2`.
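
Since `RUBY_GC_HEAP_OLDOBJECT_LIMIT_FACTOR` is read from the environment when the Ruby process starts, trying the suggested values is as simple as setting the variable before launching `fluentd`; a minimal sketch (the config path is illustrative):

```text
# a smaller factor triggers full GC sooner, trading throughput for lower memory usage
$ RUBY_GC_HEAP_OLDOBJECT_LIMIT_FACTOR=0.9 fluentd -c fluent.conf
```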

deployment/rpc.md

Lines changed: 1 addition & 1 deletion
@@ -2,7 +2,7 @@
HTTP RPC enables you to manage your Fluentd instance through HTTP endpoints. You can use this feature as a replacement of [Unix signals](signals.md).

- It is especially useful for environments where signals are not supported well e.g. Windows. This requires Fluentd to start not with --no-supervisor command-line option.
+ It is especially useful for environments where signals are not supported well e.g. Windows. This requires Fluentd to start not with `--no-supervisor` command-line option.

## Configuration
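
For reference, HTTP RPC is enabled in the `<system>` section; a minimal sketch (the bind address and port are illustrative):

```text
<system>
  rpc_endpoint 127.0.0.1:24444
</system>
```

With that in place, an endpoint such as `/api/plugins.flushBuffers` can be called with `curl http://127.0.0.1:24444/api/plugins.flushBuffers` to flush buffers much as `SIGUSR1` would, which is what makes the feature useful on platforms with poor signal support.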

deployment/system-config.md

Lines changed: 1 addition & 1 deletion
@@ -1,6 +1,6 @@
# System Configuration

- This article describes the Fluentd's system configurations for `<system>` section and command-line options.
+ This article describes Fluentd's system configurations for the `<system>` section and command-line options.

## Overview
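
For readers who have not seen it, a small sketch of the `<system>` section being referred to (the option values are illustrative):

```text
<system>
  # log level for the whole instance
  log_level info
  # base name used for the supervisor and worker processes
  process_name fluentd-example
</system>
```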

deployment/trouble-shooting.md

Lines changed: 1 addition & 1 deletion
@@ -87,7 +87,7 @@ You sometimes hit unexpected shutdown with non-zero exit status like this:
2016-01-01 00:00:02 +0800 [info]: process finished code=6
```

- If the problem happens inside Ruby e.g. segmentation fault, C extension bug, etc., you cannot get the complete log when `fluentd` process is daemonized. For example, `td-agent` launches fluentd with `--daemon` option. In `td-agent` case, you can get the complete log with following command to simulate `/etc/init.d/td-agent start` without daemonize:
+ If the problem happens inside Ruby e.g. segmentation fault, C extension bug, etc., you cannot get the complete log when `fluentd` process is daemonized. For example, `td-agent` launches fluentd with `--daemon` option. In `td-agent` case, you can get the complete log with following command to simulate `/etc/init.d/td-agent start` without daemonizing (run in the foreground):

```text
$ sudo LD_PRELOAD=/opt/td-agent/embedded/lib/libjemalloc.so /usr/sbin/td-agent -c /etc/td-agent/td-agent.conf --user td-agent --group td-agent

how-to-guides/free-alternative-to-splunk-by-fluentd.md

Lines changed: 2 additions & 2 deletions
@@ -6,7 +6,7 @@
[Elasticsearch](https://www.elastic.co/products/elasticsearch) is an open-source search engine well-known for its ease of use. [Kibana](https://www.elastic.co/products/kibana) is an open-source Web UI that makes Elasticsearch user friendly for marketers, engineers and data scientists alike.

- By combining these three tools \(Fluentd + Elasticsearch + Kibana\) we get a scalable, flexible, easy to use the log search engine with a great Web UI that provides an open-source Splunk alternative, all for free.
+ By combining these three tools \(Fluentd + Elasticsearch + Kibana\) we get a scalable, flexible, easy to use log search engine with a great Web UI that provides an open-source Splunk alternative, all for free.

![Fluentd + Elasticsearch + Kibana](../.gitbook/assets/fluentd-elasticsearch-kibana.png)

@@ -45,7 +45,7 @@ Note: You can also install Elasticsearch \(and Kibana\) using RPM/DEB packages.
## Set Up Kibana

- To install Kibana, download it from the official website and extract it. Kibana is an HTML/CSS/JavaScript application \([download](https://www.elastic.co/downloads/kibana)\). Use the binary for 64-bit Linux systems. For this article, we download the binary for macOS.
+ To install Kibana, download it from the official website and extract it. Kibana is an HTML/CSS/JavaScript application \([download](https://www.elastic.co/downloads/kibana)\). Use the binary for 64-bit Linux systems.

```text
$ curl -O https://artifacts.elastic.co/downloads/kibana/kibana-6.1.0-linux-x86_64.tar.gz
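
After the download, the remaining steps are the usual extract-and-run; a sketch assuming the tarball name from the snippet above (paths are illustrative, not part of this commit):

```text
$ tar -xzf kibana-6.1.0-linux-x86_64.tar.gz
$ ./kibana-6.1.0-linux-x86_64/bin/kibana
```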

how-to-guides/syslog-influxdb.md

Lines changed: 1 addition & 1 deletion
@@ -55,7 +55,7 @@ Once it is installed, you can run it with:
$ sudo systemctl start chronograf
```

- Then, go to localhost:8888 \(or wherever you are hosting Chronograf\) to access Chronograf's web console which is the successor of InfluxDB's web console.
+ Then, go to localhost:8888 \(or wherever you are hosting Chronograf\) to access Chronograf's web console which is the successor to InfluxDB's web console.

Create a database called `test`. This is where we will be storing `syslog` data:
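
The database can also be created from a shell instead of the web console; a sketch assuming the InfluxDB 1.x `influx` CLI is available (not shown in this diff):

```text
$ influx -execute 'CREATE DATABASE test'
```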

installation/before-install.md

Lines changed: 3 additions & 3 deletions
@@ -12,7 +12,7 @@ Follow these recommendations:
It is highly recommended that you set up an NTP daemon \(e.g. [`chrony`](https://chrony.tuxfamily.org/), `ntpd`, etc.\) on the node to have an accurate current timestamp. This is crucial for all the production-grade logging services.

- For Amazon Web Services users, we recommend using [AWS-hosted NTP server](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html).
+ For Amazon Web Services users, we recommend using the [AWS-hosted NTP server](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html).

## Increase the Maximum Number of File Descriptors
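
The heading above refers to a step whose details are outside this hunk; as a rough illustration, the current limit is usually checked with `ulimit -n` and raised via `/etc/security/limits.conf` (the user name and values below are assumptions, not taken from this commit):

```text
# check the current per-process limit
$ ulimit -n

# example /etc/security/limits.conf entries for the daemon's user
td-agent soft nofile 65536
td-agent hard nofile 65536
```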

@@ -68,9 +68,9 @@ fs.protected_hardlinks = 1
fs.protected_symlinks = 1
```

- This settings are almost enough for time-of-check to time-of-use \(TOCTOU, TOCTTOU or TOC/TOU\) which is a class of software bugs.
+ This settings are almost enough for time-of-check to time-of-use (TOCTOU, TOCTTOU or TOC/TOU) which are class of software bugs.

- If you turned off these protection, please turn on them.
+ If you turned off these protections, please turn on them.

Use `sysctl -p` command or reboot your node for the changes to take effect.
