Hi All,
I am optimising some code that needs to slice large arrays, and I'm hoping there is a more efficient way to do it.
The arrays are 40k lines plus.
I currently loop through three workbooks, each with 40k+ lines, put each range directly into a variant array, and then store those variant arrays in another array.
I then loop through the outer array, access each sub-array, and convert it to 1D so I can use the Filter function. I have tried using INDEX and tried transposing the data from each workbook; both error out, which I'm assuming is due to the size of the data.
To give a bit more background, the files are tab-delimited and I need to find given words (60+) that may exist on any row, and in any of a number of fields on each row.
Currently I'm using the below (which is an excerpt):
Sub getData()
    Const fPath As String = "D:\My documents\xyz\"
    Dim myfile As String
    Dim tempArr() As String
    Dim x As Integer
    Dim i As Long
    Dim wb As Workbook
    Dim rs As Variant
    Dim rscoll(1 To 3) As Variant

    ' Open each file and pull column A into a variant array, one per workbook
    myfile = Dir(fPath & "*.csv")
    x = 1
    Do Until myfile = ""
        Set wb = Workbooks.Open(fPath & myfile, False, True)
        rs = wb.Sheets(1).Columns(1).SpecialCells(2).Value
        rscoll(x) = rs
        x = x + 1
        wb.Close False
        myfile = Dir()
    Loop

    ' Convert each 2D sub-array to a 1D string array so Filter() can be used
    For x = 1 To 3
        ReDim tempArr(1 To UBound(rscoll(x)))
        For i = 1 To UBound(rscoll(x))
            tempArr(i) = UCase(rscoll(x)(i, 1))
        Next i
        ' Find data in array using Filter()
        ' Do some other stuff
    Next x
End Sub
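For clarity, the Filter() step I have in mind slots in where the "Find data in array using Filter()" comment is above, roughly along these lines (keyWords and the word list here are just stand-ins for my real list of 60+ terms):

    ' Goes inside the For x loop, after tempArr has been filled
    Dim keyWords() As String
    Dim hits() As String
    Dim k As Long

    keyWords = Split("WORD1,WORD2,WORD3", ",")   ' placeholder for the real 60+ words

    For k = LBound(keyWords) To UBound(keyWords)
        hits = Filter(tempArr, UCase(keyWords(k)))   ' rows containing this word
        If UBound(hits) >= 0 Then                    ' UBound is -1 when nothing matched
            ' Do some other stuff with the matching rows
        End If
    Next k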
I've also tried querying the workbooks with ADO and filtering the recordsets, but this was unbelievably slow.
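For what it's worth, the ADO attempt looked something like the sketch below (the file name and search term are placeholders, and I also had a schema.ini in the folder marking the files as tab-delimited, which the text driver seems to need to split the fields):

    Sub getDataADO()
        Const fPath As String = "D:\My documents\xyz\"
        Dim cn As Object
        Dim rsAdo As Object

        Set cn = CreateObject("ADODB.Connection")
        cn.Open "Provider=Microsoft.ACE.OLEDB.12.0;" & _
                "Data Source=" & fPath & ";" & _
                "Extended Properties=""text;HDR=No;FMT=Delimited"""

        ' HDR=No means the fields come through as F1, F2, ...
        ' [example.csv] and SOMEWORD stand in for the real file and search term
        Set rsAdo = cn.Execute( _
            "SELECT * FROM [example.csv] WHERE F1 LIKE '%SOMEWORD%'")

        Do Until rsAdo.EOF
            ' Do some other stuff with the matching records
            rsAdo.MoveNext
        Loop

        rsAdo.Close
        cn.Close
    End Sub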
Any ideas as to whether this can be sped up at all, or a better way of attacking the problem?
Cheers
Kyle