Credit @mcharytoniuk.
While importing over 250k records, `ImportModel` kept running into various problems. One of them was excessive memory usage: `ImportModel` loaded the complete file upfront (`$reader->fetchAll()`). A simple one-line change to `$reader->fetch()` returns an iterator instead, so `ImportModel` imports the CSV file row by row. This limits memory usage and makes importing larger files much easier.
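A minimal sketch of the change, assuming the `league/csv` 7.x `Reader` API (variable names are illustrative):

```php
use League\Csv\Reader;

$reader = Reader::createFromPath($filePath);

// Before: fetchAll() materializes every row into one array up front.
// $rows = $reader->fetchAll();

// After: fetch() returns an Iterator, so rows are streamed one at a time
// and only the current row is held in memory.
$rows = $reader->fetch();

foreach ($rows as $row) {
    // process a single row, then let it be garbage-collected
}
```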
This issue was being addressed too early in the process lifecycle. The row number should instead be incremented later, say in the view layer when the error is displayed. A simple helper to determine the index increment should work well:
```php
$row + $indexIncrement
```
This way we don't have to rebuild the array, which is computationally expensive for large datasets and could surprise a developer who expects the index values to be untouched, or who likewise needs the original indexes to prepare our special format.
This fixes an issue where the row number reported by import error logs would be off by 1 or 2, depending on whether the first row was labelled as titles or not.
As arrays start at 0 in PHP, `$firstRowTitles = false` would result in reported row numbers being one less than their actual number. If `$firstRowTitles = true`, the reported row numbers would be two less than their actual number (one for the zero-based index, one for the title row not existing in the `$results` set).
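A hypothetical sketch of such a helper, following the offsets described above (`getIndexIncrement` is an illustrative name, not the actual implementation):

```php
/**
 * Compute how much to add to a zero-based result index to recover
 * the human-readable CSV row number.
 */
function getIndexIncrement(bool $firstRowTitles): int
{
    // +1 converts the zero-based array index to a one-based row number;
    // +1 more accounts for the title row stripped from the result set.
    return $firstRowTitles ? 2 : 1;
}

// In the view layer, when displaying the error:
$reportedRow = $row + getIndexIncrement($firstRowTitles);
```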
In contrast to the preview CSV reader, the actual `ImportModel` did not respect any encoding values provided. This led to bugs with any non-UTF-8 characters. This PR fixes the problem by adding the appropriate encoding filter (copied from the preview reader).
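A minimal sketch of such a filter, assuming `league/csv` 7.x and PHP's built-in `convert.iconv.*` stream filters (`$encoding` stands in for the user-provided value; the actual filter copied from the preview reader may differ):

```php
use League\Csv\Reader;

$reader = Reader::createFromPath($filePath);

// Convert the input to UTF-8 before parsing, mirroring the preview reader.
// convert.iconv.* is a built-in PHP stream filter; $encoding is the
// user-supplied source encoding, e.g. 'ISO-8859-1'.
if (strtoupper($encoding) !== 'UTF-8') {
    $reader->appendStreamFilter(sprintf('convert.iconv.%s/UTF-8', $encoding));
}
```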