"Syam" wrote in message <koab3t$ql9$1@newscl01ah.mathworks.com>...
> Hi,
>
> I have a loop where I am solving thousands of linear systems using mldivide:
>
> Q = number of systems
> As = N*N*Q matrix
> Bs = N*Q matrix
> Xs = preallocated N*Q matrix
> for i=1:Q
> Xs(:,i) = As(:,:,i) \ Bs(:,i);
> end
>
> And I run the loop itself hundreds of times. N is small (like 8). So each mldivide call is pretty speedy. However the accumulated time is pretty large, and I was wondering if there was a way to vectorize the computation to avoid the loop.
>
> Inspired by an earlier post on the same subject, I tried the following for fun:
>
> spAs = a sparse block diagonal matrix composed of the N*N blocks in As
> spBs = Bs(:);
> Xs = reshape(spAs \ spBs, [N Q]);
>
> So in this case, I am concatenating the various linear systems into one and solving them simultaneously. This happens to be much faster than the loop, at least for N=8, and if I create the sparse matrix efficiently (i.e. using the 5-argument form of sparse()).
>
> So my question is, is this alternate form equivalent? The single sparse mldivide gives almost the same results as the loop version, but not exactly. The difference seems more than just rounding error.
They are equivalent up to numerical precision.
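For illustration, here is a minimal sketch of the block-diagonal construction with the 5-argument form of sparse(), assuming As is N-by-N-by-Q and Bs is N-by-Q as in your post (the index variables ii, jj, offset, I, J are just illustrative names):

[N, ~, Q] = size(As);
[ii, jj] = ndgrid(1:N, 1:N);                 % row/column indices within one N-by-N block
offset   = reshape(N*(0:Q-1), [1 1 Q]);      % index shift for each block along the diagonal
I = bsxfun(@plus, ii, offset);               % global row indices, N-by-N-by-Q
J = bsxfun(@plus, jj, offset);               % global column indices, N-by-N-by-Q
spAs = sparse(I(:), J(:), As(:), N*Q, N*Q);  % 5-argument sparse(): one block-diagonal matrix
Xs = reshape(spAs \ Bs(:), [N Q]);           % solve all Q systems in a single backslash

The sparse backslash factorizes spAs by a different route (different pivoting/scaling) than the dense LU used inside the loop, so the two answers can disagree by a few eps; the small discrepancy you see is round-off, not a wrong result.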
>
> Another piece of info -- the matrices in As are not completely independent; the first N/2 rows of each N*N matrix in As are the same for all matrices. I thought I might be able to take advantage of that in some way, but the method I tried (block LU decomposition outside the loop, which yields a common upper triangular matrix, then solving with L inside the loop and U once outside the loop) didn't give me any benefits.
I would also be surprised if there were any benefit.
There are also a few submissions on the FEX that address this:
http://www.mathworks.com/matlabcentral/fileexchange/37515-mmx-multithreaded-matrix-operations-on-n-d-matrices
http://www.mathworks.com/matlabcentral/fileexchange/24260-multiple-same-size-linear-solver
Bruno