Not looking for code... just looking for some options.
I have about 30 workbooks to parse through that range in file size from 5 MB to 50 MB each. Each file has 3 worksheets (2 that I care about). Only one of the worksheets in each workbook has a lot of data (i.e. I know what to do with the smaller worksheets).
For each workbook, I have to split the data in that big worksheet into smaller workbooks. The end result will be something like 30 workbooks split into 100 workbooks.
The new files (100 files/workbooks) will be based on info in one of the columns in the data worksheet. Example (say col A):
A; B; C; D <continues to roughly column FF>
f
f
m
f
g
f
m
g
<Repeated many times>
The example above would become 3 files (m.xlsx, f.xlsx, g.xlsx). Key point: The data across each row has tons of formatting that can't be lost (different for each row). Key point: the next workbook (1 of 30... 2 of 30...) may have f data that needs to be appended to a file I have already created (f.xlsx).
Currently I loop through each row and copy that row to the appropriate new file.
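For context, this is roughly what the current loop looks like; just a minimal sketch in Python/pywin32 driving Excel's COM API (the paths, the sheet name "Data", and the output folder are placeholders, not my real names):

```python
import win32com.client as win32

excel = win32.Dispatch("Excel.Application")
excel.Visible = False
excel.ScreenUpdating = False
excel.DisplayAlerts = False          # suppress overwrite prompts on SaveAs

src = excel.Workbooks.Open(r"C:\data\source01.xlsx")
ws = src.Worksheets("Data")          # the big data worksheet

targets = {}                         # column-A value -> (workbook, next free row)
last_row = ws.Cells(ws.Rows.Count, 1).End(-4162).Row   # -4162 = xlUp

for i in range(2, last_row + 1):     # assuming row 1 is a header
    key = ws.Cells(i, 1).Value       # "f", "m", "g", ...
    if key not in targets:
        targets[key] = (excel.Workbooks.Add(), 1)
    out, next_row = targets[key]
    # Range.Copy with a destination keeps the per-row formatting
    ws.Rows(i).Copy(out.Worksheets(1).Rows(next_row))
    targets[key] = (out, next_row + 1)

for key, (out, _) in targets.items():
    out.SaveAs(rf"C:\data\out\{key}.xlsx", 51)   # 51 = xlOpenXMLWorkbook (.xlsx)
    out.Close()

src.Close(False)
excel.Quit()
```

Note this version keeps every target workbook open in memory until the source workbook is finished, which is part of what I'm asking about below.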
Would filtering then copying be the fastest (see the rough sketch at the end of this post for what I mean)?
Would it be faster to copy all the data, then sort, find the last relevant row, and delete the rest?
How many of the new workbooks can I leave open at once with the file sizes I'm talking about (32-bit Excel)? Should I just leave them all open until all 30 workbooks are parsed (can I?)?
Any other options?
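For the filtering question above, this is the kind of thing I mean; again just a hedged sketch with the same pywin32 setup and placeholder names, doing one AutoFilter plus one copy of the visible cells per category instead of one copy per row:

```python
import win32com.client as win32

excel = win32.Dispatch("Excel.Application")
excel.DisplayAlerts = False

src = excel.Workbooks.Open(r"C:\data\source01.xlsx")
ws = src.Worksheets("Data")
data = ws.Range("A1").CurrentRegion   # assumes one contiguous block with a header row

for key in ("f", "m", "g"):           # the distinct values in column A
    data.AutoFilter(1, key)           # Field=1 (column A), Criteria1=key
    out = excel.Workbooks.Add()
    # Copying the visible (filtered) cells in one operation keeps the row formatting
    data.SpecialCells(12).Copy(out.Worksheets(1).Range("A1"))   # 12 = xlCellTypeVisible
    out.SaveAs(rf"C:\data\out\{key}.xlsx", 51)                  # 51 = .xlsx
    out.Close()

ws.AutoFilterMode = False             # clear the filter when finished
src.Close(False)
excel.Quit()
```

For workbooks 2 of 30 onward I would open the existing f.xlsx/m.xlsx/g.xlsx and paste at the first empty row instead of Workbooks.Add, but the idea is the same.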