This package provides several API flavors. Every example is available on GitHub.
The Node.js stream API is scalable and offers the greatest control over the data flow. It comes at the cost of being more verbose and harder to write. Data is consumed inside the "readable" event with the stream.read function, then written by calling the stream.write function. The stream example illustrates how to initialize each package and how to plug them together.
Piping in Node.js is part of the stream API and behaves just like Unix pipes, where the output of a process, here a function, is redirected as the input of the following process. A pipe example is provided with an unconventional syntax:
Also available in the csv module is the callback API. The full dataset is provided in the second argument of the callback, so this approach will not scale to large datasets. The callback example initializes each CSV function sequentially, passing it the output of the previous one. Note that, for the sake of clarity, the example does not handle errors. It is enough spaghetti code as it is.
The sync API behaves like a pure function: for a given input, it always produces the same output. Because of its simplicity, this is the recommended approach if you don't need scalability and if your dataset fits in memory.
The module to import is csv/sync. The sync example illustrates its usage.