
====== Matrices ======
{{indexmenu_n>70}}
~~NOCACHE~~

Matrices are handled in a similar way, only with two indices for row number (first index) and column number (second index). Rows are separated by either a semicolon or a linefeed during input.

<jc math>
M=[1:3 ; 4:6 ; 7:9]
printf('M=%f\n',M)
a=M([1 3],:)
printf('a=%f\n',a)
C=M<4
printf('C=%f',C)
</jc>

All operators can be applied to vectors and matrices. If a scalar (per-element) operation is desired, some operators (* / ^) must be preceded by a point to distinguish them from the quite different linear-algebra versions of these operations (see chapter 2.9). Further useful functions are sum(vector) and prod(vector), which return the sum and the product of the vector's elements.
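As a brief illustration of sum, prod, and a point-prefixed per-element operator (a sketch; the vector values are arbitrary examples):

<jc math>
v=[1 2 3 4]
s=sum(v)   % 1+2+3+4
printf('sum=%f\n',s)
p=prod(v)  % 1*2*3*4
printf('prod=%f\n',p)
c=v.^2     % per-element square (note the point)
printf('v.^2=%f\n',c)
</jc>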

The following matrix operations are available:

^ symbol ^ operation ^
| + | addition |
| - | subtraction |
| * | multiplication |
| ^ | power |
| ' | transpose |
| \ | left division |
| / | right division |

These operations also apply to vectors and to scalars (1-by-1 matrices). If the sizes of the matrices are incompatible for a matrix operation, an error message results, except in the case of scalar-matrix operations (addition, subtraction, multiplication, and division), in which case each entry of the matrix is operated on by the scalar.
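A brief sketch of such scalar-matrix operations (the matrix values are arbitrary examples):

<jc math>
M=[1 2; 3 4]
A=2*M  % each entry is multiplied by the scalar
printf('2*M=%f\n',A)
B=M-1  % 1 is subtracted from each entry
printf('M-1=%f\n',B)
</jc>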


The colon notation can be used to access sub-matrices of a matrix. For example,

<jc math>
M=[1:10; 1:10; 1:10; 1:10]
A=M(1:4,3)
printf('%f',A)
</jc>

is the column vector consisting of the first four entries of the third column of M. A colon by itself denotes an entire row or column:

<code>
A(:,3)
</code>

is the third column of A, and A(1:4,:) is the first four rows. Arbitrary integral vectors can be used as subscripts:

<code>
A(:,[2 4])
</code>

contains columns 2 and 4 of A. Such subscripting can be used on both sides of an assignment statement:

<code>
A(:,[2 4 5]) = B(:,1:3)
</code>

replaces columns 2, 4, and 5 of A with the first three columns of B. Note that the entire altered matrix A is printed and assigned. Try it.
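To try it, here is a runnable sketch with assumed example matrices A and B:

<jc math>
A=zeros(3,5); B=ones(3,3);
A(:,[2 4 5]) = B(:,1:3)  % columns 2, 4 and 5 of A are replaced
printf('A=%f\n',A)
</jc>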


Columns 2 and 4 of A can be multiplied on the right by the 2-by-2 matrix [1 2;3 4]:

<code>
A(:,[2,4]) = A(:,[2,4])*[1 2;3 4]
</code>

Once again, the entire altered matrix is printed and assigned.

====== Operations ======

Here is the list of matrix and vector operations:

^ Name(Arguments) ^ Function ^
| linspace($var_1$,$var_2$,COUNT) | vector with COUNT numbers ranging from $var_1$ to $var_2$ |
| length($vector$) | number of elements in $vector$ |
| zeros(ROWS[,COLUMNS]) | matrix of zeros |
| ones(ROWS[,COLUMNS]) | matrix of ones |
| eye(ROWS[,COLUMNS]) | identity matrix (ones on the diagonal) |
| rand(ROWS[,COLUMNS]) | matrix of random numbers |
| hilb(RANK) | Hilbert matrix |
| invhilb(RANK) | inverse Hilbert matrix |
| size($matrix$) | number of rows and columns |
| sum($var$) | if $var$ is a vector: sum of elements; if $var$ is a matrix: sum of each column |
| find($var$) | indices of non-vanishing elements |
| max($var$) | largest element in $var$ |
| min($var$) | smallest element in $var$ |
| diag($var$[,OFFSET]) | if $var$ is a vector: matrix with $var$ as diagonal; if $var$ is a matrix: diagonal as vector |
| det($matrix$) | determinant |
| eig($matrix$) | eigenvalues |
| inv($matrix$) | inverse matrix |
| inversematrix($matrix$) | inverse matrix |
| pinv($matrix$) | pseudoinverse |
| lu($matrix$) | LU decomposition |
| svd($matrix$) | singular value decomposition (Lapack) |
| qr($matrix$) | QR decomposition (Lapack) |
| eigen($matrix$) | eigenvalues (Lapack) |

For example, zeros(m,n) produces an m-by-n matrix of zeros and zeros(n) produces an n-by-n one.
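A few of these functions in action (a sketch; the arguments are arbitrary examples):

<jc math>
v=linspace(0,1,5)  % 5 numbers ranging from 0 to 1
printf('v=%f\n',v)
n=length(v)
printf('length=%f\n',n)
E=eye(3)           % 3-by-3 identity matrix
printf('E=%f\n',E)
</jc>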


====== Functions ======

Several standard matrices are created by means of functions without specifying individual elements: **ones(n,m), zeros(n,m), rand(n,m)** return matrices with elements 1, 0, or random numbers between 0 and 1. **eye(n,m)** has diagonal elements 1, all others 0, and **hilb(n)** creates the Hilbert matrix of rank n.


<jc math>
A=rand(1,3)
printf('%f',A)
B=hilb(4)
printf('%f',B)
</jc>


The following functions are provided for matrix calculations: **diag(x)** (extracts diagonal elements), **det(x)** (determinant), **eig(x)** (eigenvalues), **inv(x)** (inverse), **pinv(x)** (pseudo-inverse). The transposed (adjoint) matrix is created using the operator **'**.

<jc math>
A=det(hilb(4)) % determinant
printf('%f\n',A)
M=[2 3 1; 4 4 5; 2 9 3];
A=eig(M)
printf('eig=%f\n',A)
A=inversematrix(M)
printf('Inverse=%f\n',A)
</jc>
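The transpose operator ' mentioned above can be sketched as follows (the matrix values are an arbitrary example):

<jc math>
M=[1 2 3; 4 5 6]
A=M'  % rows become columns
printf('Transpose=%f\n',A)
</jc>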


The nontrivial functions are all based on the LU decomposition, which is also accessible as the function call lu(x). It has 2 or 3 return values, therefore the left side of the equation must provide multiple variables; see the example below:

<jc math>
M=[2 3 1; 4 4 5; 2 9 3]
[l,u,p]=lu(M) % 2 or 3 return values
</jc>


Without a preceding point the arithmetic operators act as matrix operators, e.g. * corresponds to matrix and vector multiplication.

<jc math>
x=[2 1 4]; y=[3 5 6];
c=x.*y % with point
printf('x.*y=%f\n',c)
c=x*y % without point
printf('x*y=%f\n',c)
</jc>


If one of the arguments is a scalar datatype, the operation is repeated for each element of the other argument:

<jc math>
x=[2 1 4];
c=x+3
printf('x+3=%f\n',c)
</jc>


Matrix division corresponds to multiplication by the pseudo-inverse. The backslash operator performs left-division, which can be used to solve systems of linear equations:

<jc math>
M=[2 3 1; 4 4 5; 2 9 3];
b=[0;3;1];
x=M\b % solution of M*x = b
printf('M\b=%f\n',x)
c=M*x % control
printf('M*x=%f\n',c)
</jc>


Systems of linear equations can (and should) be solved directly with the function linsolve(A,b).
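For example, the system from above can be sketched with linsolve:

<jc math>
M=[2 3 1; 4 4 5; 2 9 3];
b=[0;3;1];
x=linsolve(M,b)  % same solution as M\b
printf('x=%f\n',x)
</jc>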


===== Lapack =====

The application contains JLAPACK [9], the Java port of the LAPACK [10] routines, with extended and better algorithms for matrix calculations. However, these are limited to matrices with real coefficients in floating-point format. The LAPACK routines are accessed by the following functions:

**svd(A)** Singular value decomposition of A (1 or 3 return values).

<jc math>
A=[2 3 1; 4 4 5; 2 9 3];
a=svd(A)
printf('svd=%f\n',a)
</jc>


**qr(A)** QR-decomposition of A (2 return values).

<jc math>
A=[2 3 1; 4 4 5; 2 9 3];
[q,r]=qr(A)
printf('q=%f\n',q)
printf('r=%f\n',r)
</jc>


**linsolve2(A, b)** Solves $A\cdot x=b$ (1 return value). Example in chapter 2.13.1.
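A minimal sketch of linsolve2, reusing the example system from the previous section:

<jc math>
A=[2 3 1; 4 4 5; 2 9 3];
b=[0;3;1];
x=linsolve2(A,b)
printf('x=%f\n',x)
</jc>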


**linlstsq(A, b)** Solves the overdetermined system $A\cdot x=b$ (1 return value).

**eigen(A)** Eigenvalues of A (1 return value).

<jc math>
A=[2 3 1; 4 4 5; 2 9 3];
a=eigen(A)
printf('eigen=%f\n',a)
</jc>


===== LAPACK vs Jasymca =====

We calculate the 4th-degree regression polynomial for the following x,y data:

<jc math>
x=[1:6],y=x+1
printf('x=%f\n',x)
printf('y=%f\n',y)
a=polyfit(x,y,4)
printf('polyfit=%f\n',a)
</jc>


The coefficients p(1), p(2), and p(3) should vanish, since x and y represent a perfect straight line. This is an unstable problem, and it can easily be extended to make Jasymca fail completely. In our second attempt we use the Lapack routine linlstsq:

<jc math>
x=[1:6],y=x+1;
l=length(x);n=4;
X=(x'*ones(1,n+1)).^(ones(l,1)*(n:-1:0))
a=linlstsq(X,y')
printf('%f\n',a)
</jc>


The coefficients p(1), p(2), and p(3) are now significantly smaller. This particular problem can also be solved exactly using Jasymca routines and exact numbers, which avoids any rounding errors:

<jc math>
x=rat([1:6]);y=x+1;
a=polyfit(x,y,4)
printf('%f',a)
</jc>


;;#
[[jmathlab:programming |{{::next.png}}]]
;;#
