"Image Analyst" wrote in message <kgmi5n$cml$1@newscl01ah.mathworks.com>...
> "Scott" wrote in message <kgloic$s1b$1@newscl01ah.mathworks.com>...
> > I'm reading in a large binary file ...
> > A = 7000x2 matrix of uint8
> > B = typecast(A,'int16')
> > Desired Result:
> > B = 7000x1 array of int16 where each int16 is derived from each row in A.
> =========================================
> Not sure what the problem was, but 7000 by 2 is actually very small. It's not like
> you have tens of millions of numbers or anything. 14,000 elements is a pretty small
> array by today's standards. Even a garden variety digital image is about 30 thousand
> times bigger than that.
Haha! I agree! That was just the header. The file size can vary from 5M to 2G.
It takes me about 6 minutes to read a 5M file right now - completely unacceptable. I was able to speed the header read from 30 sec to 7 ms by typecasting and byte swapping after reading all the bytes in at once. I plan to use the same approach for the data points. Each data point can be anything from a uint8 to a double, and its type is determined by a uint16 ID that precedes it. From what I've been reading here, the typecasting takes the longest, so I plan to gather the data in arrays, then typecast, and byte swap if necessary. I'm hoping for a ~300M file to be processed in less than 10 seconds.
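For what it's worth, the original typecast error is likely because typecast requires a vector, not a 7000x2 matrix. A minimal sketch of the read-then-typecast-then-swap approach (variable names and the small example matrix are illustrative, not from an actual file):

```matlab
% A: Nx2 uint8, one row per value, bytes in file order (assume big-endian here).
A = uint8([0 1; 1 2; 255 255]);

% typecast only accepts vectors, so lay the bytes out row by row first:
% transpose, then flatten column-wise so each row's two bytes stay adjacent.
bytes = reshape(A.', [], 1);

% Reinterpret pairs of bytes as int16 (uses the machine's native byte order).
B = typecast(bytes, 'int16');

% If the file's byte order differs from the machine's (e.g. big-endian file
% on a little-endian PC), fix it in one vectorized call.
B = swapbytes(B);
```

Note that fread can also do the byte ordering for you (e.g. `fread(fid, n, '*int16', 0, 'ieee-be')`), but as discussed above, one bulk uint8 read followed by a single typecast/swapbytes pass tends to be faster than many small typed reads.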