Shortnote

The following is a collection of functions that process matrices in LSL.

Most functions are recursive and compute each matrix operation in small, modular steps - this is done deliberately so that individual components of the code can be dismantled and reused.

The calculators rely on Laplace (cofactor) expansion, which is computationally expensive - its running time grows factorially with the matrix dimension. There are faster numerical methods to calculate the determinant and obtain the inverse of a matrix; however, this implementation deliberately emulates a pen-and-paper solution using Laplace expansion - for example, one could use the code to display solutions in real-time by moving the primitive or by using particle effects to show the actual numbers being computed.


Determinant

Computing the determinant of a matrix is performed by expanding along a given row and computing the signed minors (cofactors) of its elements. For example, suppose we have a $2\times2$ matrix $A$ of the form:

$$
A = 
\begin{pmatrix} 
e_{0} & e_{1} \\
e_{2} & e_{3}
\end{pmatrix}
$$

that would mean that:

$$
det(A)=e_{0}*e_{3}-e_{1}*e_{2}
$$

For larger matrices, an element such as $e_{3}$ may itself stand in for a sub-matrix of $A$, whose determinant then has to be computed. This is performed recursively by traversing a matrix row and calling wasDeterminant on every sub-matrix.

For a larger matrix, such as:

$$
A = 
\begin{pmatrix} 
e_{0} & e_{1} & e_{2} \\
e_{3} & e_{4} & e_{5} \\
e_{6} & e_{7} & e_{8} \\
\end{pmatrix}
$$

each element of the traversed row must be multiplied with the value of the sub-determinant while minding the alternation of the signs. In this case:

$$
det(A)=e_{0} * ( e_{8} * e_{4} - e_{7} * e_{5} ) - e_{1} * ( e_{8} * e_{3} - e_{6} * e_{5} ) + e_{2} * ( e_{7} * e_{3} - e_{6} * e_{4} )
$$
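The recursive expansion above can be sketched as follows; this is a Python illustration rather than the original LSL code (which stores the matrix as a flat list), and the function names are chosen here for clarity:

```python
def minor(m, row, col):
    """Return the sub-matrix of m with the given row and column removed."""
    return [[m[i][j] for j in range(len(m)) if j != col]
            for i in range(len(m)) if i != row]

def determinant(m):
    """Compute det(m) by Laplace expansion along the first row."""
    if len(m) == 1:
        return m[0][0]
    total = 0
    for col in range(len(m)):
        # signs alternate along the row: +, -, +, ...
        sign = 1 if col % 2 == 0 else -1
        total += sign * m[0][col] * determinant(minor(m, 0, col))
    return total

print(determinant([[1, 2], [3, 4]]))  # -2
```

The base case is the $1\times1$ matrix; everything larger is reduced to sub-determinants exactly as in the formula above.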

Adjugate

The adjugate step consists of finding the cofactor (a signed sub-determinant) of every element in the matrix and replacing the original value with the cofactor of its sub-matrix. For example, for a matrix $B$:

$$
B = 
\begin{pmatrix} 
e_{0} & e_{1} & e_{2} \\
e_{3} & e_{4} & e_{5} \\
e_{6} & e_{7} & e_{8} \\
\end{pmatrix}
$$

One would take each element, ignore the row and column it appears in, and compute the determinant of the resulting sub-matrix of $B$. For example, in this particular case, the element $e_{0}$ will be replaced with the value of the determinant:

$$
B_{e_{0}} = 
\left[ \begin{matrix} 
e_{4} & e_{5} \\
e_{7} & e_{8}
\end{matrix} \right]
$$

The signs alternate on rows and columns, for example, the cofactor of the second element $e_{1}$ will have a negative sign:

$$
B_{e_{1}} = -
\left[ \begin{matrix} 
e_{3} & e_{5} \\
e_{6} & e_{8} \\
\end{matrix} \right]
$$

The same applies to the cofactor of element $e_{3}$, that will have a negative sign as well:

$$
B_{e_{3}} = -
\left[ \begin{matrix} 
e_{1} & e_{2} \\
e_{7} & e_{8}
\end{matrix} \right]
$$

By going through every element of the matrix and replacing each element with the value of the determinant of its sub-matrix (the cofactor), we obtain a new matrix that finally has to be transposed in order to obtain the adjugate of the matrix.

Note that, at the time of writing, the cofactor function does not apply signs to the values - this is performed later on by the adjugate or determinant functions.
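The whole adjugate step (cofactors, signs, transposition) can be sketched in Python; again, this is an illustrative sketch rather than the LSL implementation, with helper names chosen for readability:

```python
def minor(m, row, col):
    """Sub-matrix of m with the given row and column removed."""
    return [[m[i][j] for j in range(len(m)) if j != col]
            for i in range(len(m)) if i != row]

def determinant(m):
    """Laplace expansion along the first row."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** c * m[0][c] * determinant(minor(m, 0, c))
               for c in range(len(m)))

def adjugate(m):
    n = len(m)
    # cofactor matrix: signed determinants of every element's sub-matrix
    cof = [[(-1) ** (i + j) * determinant(minor(m, i, j))
            for j in range(n)] for i in range(n)]
    # transpose the cofactor matrix to obtain the adjugate
    return [[cof[j][i] for j in range(n)] for i in range(n)]

print(adjugate([[1, 2], [3, 4]]))  # [[4, -2], [-3, 1]]
```

As a sanity check, $B \cdot \operatorname{adj}(B) = \det(B) \cdot I$, which is what makes the adjugate useful for computing the inverse.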

Scalar Product

For a multiplication between a scalar $p$ and a matrix $A$ so that $C=p*A$:

$$
C = p * 
\begin{pmatrix} 
a_{1,1} & a_{1,2} & a_{...} & a_{1,j} \\
a_{2,1} & a_{2,2} & a_{...} & a_{2,j} \\
\cdots & \cdots & \cdots & \cdots \\
a_{i,1} & a_{i,2} & a_{...} & a_{i,j} \\
\end{pmatrix}_{i,j}
=
\begin{pmatrix} 
p*a_{1,1} & p*a_{1,2} & p*a_{...} & p*a_{1,j} \\
p*a_{2,1} & p*a_{2,2} & p*a_{...} & p*a_{2,j} \\
\cdots & \cdots & \cdots & \cdots \\
p*a_{i,1} & p*a_{i,2} & p*a_{...} & p*a_{i,j} \\
\end{pmatrix}_{i,j}
$$

we multiply the scalar $p$ with every element in $A$ in order to obtain the resulting matrix $C$.
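Element-wise scaling translates into a one-liner; a minimal Python sketch (not the LSL code, which operates on a flat list):

```python
def scalar_product(p, a):
    # multiply every element of the matrix a by the scalar p
    return [[p * x for x in row] for row in a]

print(scalar_product(3, [[1, 2], [3, 4]]))  # [[3, 6], [9, 12]]
```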

Dot Product

In order to multiply two matrices, the number of columns of the first matrix $A$ has to be equal to the number of rows of the second matrix $B$.

$$
\begin{pmatrix} 
a_{1,1} & a_{1,2} & a_{...} & a_{1,j} \\
a_{2,1} & a_{2,2} & a_{...} & a_{2,j} \\
\cdots & \cdots & \cdots & \cdots \\
a_{i,1} & a_{i,2} & a_{...} & a_{i,j} \\
\end{pmatrix}_{i,j}
\times
\begin{pmatrix} 
b_{1,1} & b_{1,2} & b_{...} & b_{1,k} \\
b_{2,1} & b_{2,2} & b_{...} & b_{2,k} \\
\cdots & \cdots & \cdots & \cdots \\
b_{j,1} & b_{j,2} & b_{...} & b_{j,k} \\
\end{pmatrix}_{j,k}
=
\begin{pmatrix} 
c_{1,1} & c_{1,2} & c_{...} & c_{1,k} \\
c_{2,1} & c_{2,2} & c_{...} & c_{2,k} \\
\cdots & \cdots & \cdots & \cdots \\
c_{i,1} & c_{i,2} & c_{...} & c_{i,k} \\
\end{pmatrix}_{i,k}
$$

In this case, we have $A_{i,j}$ and $B_{j,k}$, where the inner dimension $j$ is shared. For every element in $C_{i,k}$ we multiply elements from the rows of $A$ with elements from the columns of $B$ and add them up to obtain the elements of $C$.

\begin{eqnarray*}
c_{1,1} &=& a_{1,1}*b_{1,1} + a_{1,2}*b_{2,1} + \cdots + a_{1,j}*b_{j,1} \\
c_{1,2} &=& a_{1,1}*b_{1,2} + a_{1,2}*b_{2,2} + \cdots + a_{1,j}*b_{j,2} \\
&\cdots& \\
c_{i,1} &=& a_{i,1}*b_{1,1} + a_{i,2}*b_{2,1} + \cdots + a_{i,j}*b_{j,1} \\
&\cdots& \\
c_{i,k} &=& a_{i,1}*b_{1,k} + a_{i,2}*b_{2,k} + \cdots + a_{i,j}*b_{j,k}
\end{eqnarray*}
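The triple loop implied by these sums can be sketched as follows; this is a Python illustration of the rule above, not the LSL list-based implementation:

```python
def matrix_multiply(a, b):
    """Multiply matrix a (i x j) by matrix b (j x k), giving c (i x k)."""
    # the inner dimensions must agree: columns of a == rows of b
    assert len(a[0]) == len(b)
    # c[i][k] is the sum over j of a[i][j] * b[j][k]
    return [[sum(a[i][j] * b[j][k] for j in range(len(b)))
             for k in range(len(b[0]))] for i in range(len(a))]

print(matrix_multiply([[1, 2], [3, 4]], [[5, 6], [7, 8]]))
# [[19, 22], [43, 50]]
```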

Programming Notes


Although the operations are straightforward, writing a program to perform them is non-trivial because it implies deep recursion for large matrices - especially when computing the adjugate, where every element has to be handled and every sub-matrix processed, so the running time grows factorially with the matrix dimension. Thus, very large matrices take a long time to compute given the restrictions of LSL. Another difficulty is that LSL does not have an equivalent of multi-dimensional arrays, so a matrix can at best be expressed as a flattened list. Given sufficient time, the implementation does succeed and manages to compute the inverse of matrices of large dimensions.
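The flattened-list representation amounts to addressing element $(row, col)$ of an $n$-column matrix at index $row \cdot n + col$. A Python sketch of that addressing scheme (the accessor names here are illustrative; LSL itself would use its list functions for the same purpose):

```python
def mat_get(flat, cols, row, col):
    # element (row, col) of a matrix flattened row by row
    return flat[row * cols + col]

def mat_set(flat, cols, row, col, value):
    flat[row * cols + col] = value
    return flat

m = [1, 2, 3, 4, 5, 6]  # a 2x3 matrix flattened row by row
print(mat_get(m, 3, 1, 2))  # 6
```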



secondlife/matrices.txt · Last modified: 2017/02/22 18:22 (external edit)


