Replies: 1 comment 1 reply
There are more reasons to handle a line at a time, actually. One is that we build up data per line and then flush it from memory to the underlying stream. Adding that logic makes one "giant" function start to get a little unwieldy.
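A minimal sketch of that build-per-line-then-flush shape, assuming a simple in-memory field buffer (the `CsvWriter` name and its methods are hypothetical, not from the project):

```python
import io

class CsvWriter:
    """Accumulates one row's fields in memory, then flushes the
    finished line to the underlying stream in a single write."""

    def __init__(self, stream):
        self.stream = stream
        self.fields = []  # fields buffered for the current row

    def write_field(self, value: str):
        # Quote only when the field contains a delimiter, quote, or newline.
        if any(c in value for c in ',"\n\r'):
            value = '"' + value.replace('"', '""') + '"'
        self.fields.append(value)

    def end_row(self):
        # Flush the whole line at once, then reset per-row state.
        self.stream.write(",".join(self.fields) + "\r\n")
        self.fields.clear()

buf = io.StringIO()
w = CsvWriter(buf)
w.write_field("a")
w.write_field('say "hi"')
w.end_row()
# buf.getvalue() == 'a,"say ""hi"""\r\n'
```

Keeping the flush in a small `end_row` helper is what keeps the surrounding writer function from growing unwieldy.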
So I noticed you have some CSV stuff. I know I need to finish mine, so telling you about it is somewhat pointless.
But I'd just like to point out that in some ways I did get ahead of your current work here, and I believe, for reasons below, that it is useful to break the work into a function that handles one line at a time, and to loop over that. I realize there is the 64-byte chunk approach,
and that a 64-byte chunk may or may not fall within one line. I'm not denying that approach, nor have I implemented it yet.
But I believe the cache/context should carry across the per-line calls.
The reason to return from the function at end of line, and loop, is that it cleanly handles end of field vs. end of line vs. end of file, since a newline can be any of those. It also handles empty lines, which are records with zero fields. This worked out reasonably elegantly in my work so far. I realize I'm not done either, so maybe there's more to discover or I'm wrong, but I'm probably right in this case (that's not the point, I'm just trying to ascribe confidence-so-far).
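The per-line-function-plus-loop shape described above can be sketched roughly as follows. This is a hypothetical illustration, not the actual implementation: the context dict carries state across calls so a quoted field that spans a newline keeps accumulating, an empty line yields a record with zero fields, and escaped quotes (`""`) are deliberately omitted to keep the sketch short:

```python
def parse_line(line: str, ctx: dict):
    """Parse one physical line (without its trailing newline).
    `ctx` carries state across calls. Returns a finished record
    (a list of fields), [] for an empty line, or None when a
    quoted field continues onto the next line."""
    if ctx["in_quotes"]:
        # The newline we just consumed was data inside a quoted field.
        ctx["field"] += "\n"
    elif line == "":
        return []  # empty line: a record with zero fields
    for ch in line:
        if ctx["in_quotes"]:
            if ch == '"':
                ctx["in_quotes"] = False
            else:
                ctx["field"] += ch
        elif ch == '"':
            ctx["in_quotes"] = True
        elif ch == ",":
            ctx["fields"].append(ctx["field"])  # comma ends a field
            ctx["field"] = ""
        else:
            ctx["field"] += ch
    if ctx["in_quotes"]:
        return None  # record continues on the next line
    # Newline (or EOF) ends both the last field and the record.
    record = ctx["fields"] + [ctx["field"]]
    ctx["fields"], ctx["field"] = [], ""
    return record

def parse(text: str):
    """The driving loop: call the per-line function for each line,
    collecting finished records."""
    ctx = {"in_quotes": False, "field": "", "fields": []}
    records = []
    for line in text.splitlines():
        rec = parse_line(line, ctx)
        if rec is not None:
            records.append(rec)
    return records
```

The point of the shape is visible in the return values: the same newline event is classified as end-of-field-and-record (normal return), as data (when `in_quotes` is set), or as a zero-field record (empty line), without any of that branching leaking into the outer loop.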