myHotTake

How Do on('data') and read() Differ in Node.js Streams?

Hey there! If this little story resonates with you, feel free to like or share it.


So, I’m an artist working on a massive mural. This mural is so large that I can’t take it in all at once, so I rely on my assistant to help me manage the workload. This is kind of like how streams work in JavaScript, where data flows and I need to process it efficiently.

My assistant has two ways of helping me: one is like the on('data') method, and the other is like the read() method.

When my assistant uses the on('data') approach, they are actively watching for each new section of the mural to be delivered to the studio. As soon as a new canvas arrives, my assistant immediately starts handing me pieces to work on. I don’t have to worry about when the next piece will come; I just keep painting what’s in front of me, trusting that my assistant will keep the flow going smoothly. This is a bit like event-driven data handling, where I’m continuously processing data as it arrives without having to manually request more.

On the other hand, when my assistant uses the read() approach, it’s more like a calm day in the studio where I decide when I’m ready to tackle the next section of the mural. If I’m feeling ready for more, I simply call out to my assistant, and they hand me the next piece. This gives me more control over the pace and timing, similar to how the read() method allows me to pull data when I’m prepared to handle it.

In both scenarios, the mural is getting painted, but the approach and control differ. Sometimes I prefer the steady, predictable flow of my assistant actively handing me pieces (on('data')), and other times I like the control of deciding when to pull in more work (read()).


The on('data') Method

In Node.js, using the on('data') method means setting up an event listener that fires whenever a new chunk of data arrives. Here’s a simple example:

const fs = require('fs');

// Create a readable stream
const readableStream = fs.createReadStream('example.txt');

// Listen for 'data' events
readableStream.on('data', (chunk) => {
  console.log('Received chunk:', chunk.toString());
});

In this example, the stream reads data from a file named example.txt. As data flows in, the on('data') event handler is triggered for each chunk, much like my assistant handing me each new section of the mural to paint.

The read() Method

With the read() method, I have more control over when I receive the data. Here’s how you might implement that:

const fs = require('fs');

// Create a readable stream
const readableStream = fs.createReadStream('example.txt');

// Listen for the 'readable' event, emitted when data is available to read
readableStream.on('readable', () => {
  let chunk;
  // Use read() to manually pull data
  while (null !== (chunk = readableStream.read())) {
    console.log('Read chunk:', chunk.toString());
  }
});

In this scenario, the readable event tells me when the stream is ready, and I decide when to read data, similar to calling out to my assistant when I’m ready for the next part of the mural.

Key Takeaways

  • Event-driven vs. Pull-driven: on('data') is event-driven, automatically processing chunks as they arrive. This is great for continuous flows where you want to handle data as soon as it’s available.
  • Controlled Flow: read() offers more control, letting you decide when to handle new data, which can be useful in scenarios where you need to manage resources more carefully or process data in specific intervals.
  • Versatility of Streams: Both methods highlight the flexibility of streams in Node.js, allowing you to choose the approach that best fits your application’s needs.
