04 Buffers
The Buffer class is part of the Node.js API to make it possible to manipulate or interact with streams of binary data.
- Binary is simply a set or a collection of 1s and 0s, e.g. 10, 01, 001, 1110, 00101011.
- Each 1 and 0 in a set is called a bit (short for Binary digIT).
- To store or represent a piece of data, a computer needs to convert that data to its binary representation.
- Integer binary representation: 12 => 1100
- String binary representation: string => Character Code (Code Point) => binary
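Both conversions can be reproduced in JavaScript with `toString(2)` (base-2 radix) and `codePointAt`; a quick sketch:

```javascript
// Integer: toString with radix 2 gives the base-2 digits.
const bits = (12).toString(2);
console.log(bits); // '1100'

// String: each character maps to a code point first, then to binary.
const codePoint = 'h'.codePointAt(0);
console.log(codePoint);             // 104
console.log(codePoint.toString(2)); // '1101000'
```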
- Character sets are predefined rules that specify which exact number represents each character
- There are different definitions of these rules. The very popular ones include Unicode and ASCII
- JavaScript plays really well with Unicode Character Sets
Just as there are rules that define what number should represent a character, there are also rules that define how numbers should be represented in binaries. Specifically, how many bits to use to represent the number. This is called Character Encoding.
One of the definitions for Character Encoding, UTF-8, states that characters should be encoded in bytes.
- A byte is a set of eight bits: eight 1's and 0's.
- A byte can be used to represent the Code Point of any character in binary.
The binary representation of 12 is 1100. So when UTF-8 states that 12 should be encoded in eight bits, it is saying that a computer needs to add more bits to the left side of the actual base-2 representation of the number 12 to make it a byte. So 12 should be stored as 00001100.
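That left-padding step can be reproduced directly in JavaScript, for example with `padStart`:

```javascript
// Pad the base-2 digits of 12 with leading zeros up to eight bits (one byte).
const byte = (12).toString(2).padStart(8, '0');
console.log(byte); // '00001100'
```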
Just as computers have specified rules for storing strings or characters in binary, they also have specified rules for how images and videos should be converted or encoded and stored in binary. The point here is that computers store all data types in binary, and this is known as Binary Data.
A stream in Node.js is a sequence of data being moved from one point to the other over time.
Concept:
- you have a huge amount of data to process, but you don't need to wait for all the data to be available before you start processing it.
- this big data is broken down and sent in chunks.
Typically, data is moved with the intention to process it, or read it, and make decisions based on it. But there is a minimum and a maximum amount of data a process can take over time.
- if the data arrives faster than the process consumes it, the excess data needs to wait somewhere for its turn to be processed
- if the process consumes the data faster than it arrives, the data that arrives early needs to wait for a certain amount of data to accumulate before being sent out for processing
- the "waiting area" is the buffer!
It is even possible to create your own buffer! Aside from the ones Node.js automatically creates during a stream, you can create and manipulate your own.
Let's create one!
There are different ways to create a buffer.
// Create an empty buffer of size 10.
// A buffer that only can accommodate 10 bytes.
const buf1 = Buffer.alloc(10);
// Create a buffer with content
const buf2 = Buffer.from('hello buffer');
Once a buffer has been created, you can start interacting with it.
// Examine the structure of a buffer
buf1.toJSON()
// { type: 'Buffer', data: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]}
// an empty buffer
buf2.toJSON()
/*
{
type: 'Buffer',
data: [
104, 101, 108, 108, 111, 32, 98, 117, 102, 102, 101, 114
]
}
*/
// the toJSON() method presents the data as the Unicode Code Points of the characters
// Examine the size of a buffer
buf1.length // 10
buf2.length // 12, auto-assigned based on the initial content when created.
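Note that `length` counts bytes, not characters. With multi-byte UTF-8 characters the two can differ; a quick sketch:

```javascript
// 'é' is one character, but UTF-8 encodes its code point in two bytes.
const accented = Buffer.from('é');
console.log(accented.length); // 2
console.log('é'.length);      // 1 (string length counts UTF-16 code units)
```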
// Write to a buffer
buf1.write('Buffer really rocks!');
// Decode a buffer
buf1.toString() // 'Buffer rea'
// oops: because buf1 was created to hold only 10 bytes, it couldn't accommodate the rest of the characters
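One way to avoid that truncation is to check how much room a string needs before allocating. The sketch below uses `write()`'s return value (the number of bytes actually written) and `Buffer.byteLength` to size the buffer up front:

```javascript
// write() returns the number of bytes actually written,
// so truncation like the one above is detectable.
const small = Buffer.alloc(10);
const written = small.write('Buffer really rocks!');
console.log(written); // 10: only the first 10 bytes fit

// Buffer.byteLength reports how many bytes a string needs in a given encoding.
const needed = Buffer.byteLength('Buffer really rocks!');
const big = Buffer.alloc(needed);
big.write('Buffer really rocks!');
console.log(big.toString()); // 'Buffer really rocks!'
```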