Financial algebra examples


Financial Algebra - 45: Stock Splits (Lesson 1-8)

"Perception is strong and sight is weak. In strategy, it is important to see distant things as if they were close and to take a distanced view of close things." – Miyamoto Musashi, Japanese samurai, artist, and strategist

Key Terms: stock split, outstanding shares, market capitalization or market cap, traditional stock split, reverse stock split, penny stock, fractional part of a share

Objectives:
* Calculate the post-split outstanding shares and share price for a traditional split.
* Calculate the post-split outstanding shares and share price for a reverse split.
* Calculate the fractional value amount that a shareholder receives after a split.

Common Core: A-CED1, A-REI3

CCSS Warm-Up: Given: one half of x minus one half of y equals -4.
1. Write an equation.
2. Solve the equation for x.
3. Solve the equation for y.

Why do corporations split stocks? Suppose that someone approaches you to give you two ten-dollar bills in exchange for a twenty-dollar bill. That might appear to be a worthless transaction because the value of the exchanged monies is the same. Having two ten-dollar bills might better suit one party and having a single twenty-dollar bill might better suit the other. This is exactly what happens when a corporation offers its shareholders a stock split.

To understand what happens when stocks split, it is first necessary to understand two important and related terms, outstanding shares and market capitalization. Outstanding shares are the total number of all shares issued by a corporation that are in investors' hands. Market capitalization, or market cap, is the total value of all of a company's outstanding shares. When a stock is split, a corporation changes the number of outstanding shares while at the same time adjusting the price per share so that the market cap remains unchanged. In the opening situation, the number of bills doubled, while the value of each bill was halved. The total value of twenty dollars remained unchanged.

Why would a corporation institute a split if it is a monetary nonevent? Many say that the reason is perception. The psychology of a split depends on the type of split. In a traditional stock split, the value of a share and the number of shares are changed in such a proportional way that the value per share decreases as the number of shares increases while the market cap remains the same. These types of splits are announced in the form "a for b" where a is greater than b. For example, one of the most common traditional splits is the 2-for-1 split. The investor gets two shares for every one share held while the price per share is cut in half. Although nothing has changed in the market value of the shares, the perception is that the investor sees the stock as more affordable. Investors may be attracted to this stock because the market price per share has been lowered, and they can afford to buy more shares.

In a reverse stock split, the effect is just the opposite. The number of outstanding shares is reduced and the market price per share is increased. As the price per share increases, the investor perceives...
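
The split arithmetic itself is simple; here is a minimal Python sketch (my own illustration, not from the textbook, and the function name `split` is invented) that computes post-split shares and price for an a-for-b split and checks that the market cap is unchanged:

```python
def split(shares, price, a, b):
    """Apply an a-for-b stock split: every b old shares become a new shares."""
    new_shares = shares * a / b
    new_price = price * b / a
    return new_shares, new_price

# A 2-for-1 traditional split on 1,000 shares priced at $50 per share.
shares, price = split(1000, 50.0, 2, 1)
print(shares, price, shares * price)   # 2000.0 25.0 50000.0 -- the market cap is unchanged

# A 1-for-2 reverse split instead halves the share count and doubles the price.
print(split(1000, 50.0, 1, 2))         # (500.0, 100.0)
```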


Chapter 8 Linear algebra

While I’m assuming you’ve encountered vectors and matrices in previous math classes, we’ll start with a short review. Then in the next few chapters, we’ll cover elements of linear algebra, multivariable calculus, and differential equations that provide a nice base for financial math. Financial information is almost always multivariate: as a portfolio manager, you manage multiple assets; as a risk analyst, you look at multiple risks. Combining these classical multivariate topics with statistics will give you access to powerful mathematical techniques.

8.1 What is a vector? What is a matrix?

Most generally, a vector is an element of a vector space. Often, we care about the vector spaces \( \mathbb{R}^n \) or \( \mathbb{C}^n \), in which case

  • the vector is an \( n \)-tuple of real or complex numbers (a list in which order matters), or
  • (equivalently) a direction with a magnitude.

There are examples that are rather different, but we’ll save those for discussion later.

We’ll use the notation \( \vec{v} \) for a vector, and can write for instance

\[ \vec{v} = [v_1, \ldots, v_n] \in \mathbb{R}^n \]

for a vector whose components \( v_i \) for \( i=1,\ldots, n \) are all real numbers.

A vector space or linear space is a set of vectors that is closed under scalar multiplication and vector addition. This word “scalar” refers to a number that scales things – a stretch factor – and so for a real vector space, a scalar is a real number, and for a complex vector space, a scalar is a complex number. Notice that any vector space must include the zero vector (which we write \( \vec{0} \)) because if scaling a vector by any scalar is an operation that keeps the output in the vector space, we’ve got to be able to scale by zero.

Here are some examples of vectors and vector spaces:

Example 1.1 Think of all the vectors in \( \mathbb{R}^3 \) that have zero for their \( z \)-coordinate. Call that vector space \( V \), and write \( V \) as

\[ V=\{\vec{v} = [x,y,z] \in \mathbb{R}^3 | z=0\}. \]

Check that you can add or subtract two such vectors and still have \( z=0 \). Check that you can multiply the vector by any number in \( \mathbb{R} \) and stay in \( V \). Remember multiplication of a vector by a scalar is done element by element, so \( c [x,y,0] = [cx,cy,c\cdot 0] = [cx,cy,0] \).

Example 1.2 Another vector space \( W \subset \mathbb{R}^3 \) is given by all the vectors \( \vec{v} = [x,y,z] \) such that \( x+y+z = 0 \). You can graph this. Do you know what surface this gives in \( \mathbb{R}^3 \)? Vectors in this vector space include \( [1,2,-3] \) and \( [-\pi, 38, -38+\pi] \). Check for yourself that this vector space is closed under scalar multiplication and vector addition (that is, for all \( \vec{v}, \vec{w} \in W \), \( a\vec{v}+b\vec{w} \in W \) for scalars \( a, b \in \mathbb{R} \)).

If a vector space is contained in another vector space, we say it is a vector subspace of the larger vector space. In our examples immediately above, both \( V \) and \( W \) are vector subspaces of \( \mathbb{R}^3 \), but since neither \( V \) nor \( W \) contains the other one, neither is a vector subspace of the other. However, both \( V \) and \( W \) contain the intersection \( V \cap W \), the set of vectors \( \vec{v} = [x,y,z] \) in \( \mathbb{R}^3 \) with \( z=0 \) and \( x+y+z=0 \). That vector space, \( Q \), is given by

\[ Q = \{ [x,y,z] \in \mathbb{R}^3 \;| \; x+y = 0, z=0 \}. \]

In fact, that’s a line through the origin given by \( x+y=0 \) and \( z=0 \), and we could rewrite it as

\[ Q = \{ [x,-x,0] \in \mathbb{R}^3\} . \]

If you like parameterizations (I do!) we could use a parameter \( t \in \mathbb{R} \) and write this yet one more way:

\[ Q = \{ t[1,-1,0] \in \mathbb{R}^3| t \in \mathbb{R}\} . \]

What is a matrix? Mechanically, a matrix is made by stacking vectors as rows or columns to make a rectangular array of numbers. In this book we’ll most frequently encounter matrices of numbers, but we also make matrices of symbols or expressions (you’ll notice this in the section on rotation matrices, for instance).

An example of a matrix of real numbers would be

\[ \begin{bmatrix} 1 & 2 & 3 \\ -2 & \pi & -1.5\end{bmatrix}. \]

This is a two by three matrix. An example of a matrix of complex numbers would be

\[ \begin{bmatrix} i & 2 & 3-i \\ 0 & i \pi & -1.5i \end{bmatrix}. \]

This is also a \( 2 \times 3 \) matrix. When we need a very general \( m \times n \) (\( m \) by \( n \)) matrix called \( A \), we can write

\[ A = \begin{bmatrix} a_{11} & a_{12} & \ldots & a_{1n} \\ a_{21} & a_{22} & \ldots & a_{2n} \\ \vdots & &\ddots & \vdots \\ a_{m1} & a_{m2} & \ldots & a_{mn} \end{bmatrix} . \]

You can multiply matrices by scalars (just multiply each element by the scalar) and you can add matrices (also element by element). The observant among you might think, hmm, does that mean we can make a vector space out of matrices? Yes, you can! The vector space of all \( m \times n \) matrices with real entries is often called \( Mat_{m\times n}(\mathbb{R}) \), or \( M_{m\times n} \), or \( \mathbb{R}^{m \times n} \). Somehow you have to indicate the dimensions of the matrix and what the entries are allowed to be.
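
If you want to experiment with these operations, here is a small sketch in Python with numpy (my choice of tool; the text does not prescribe one) showing scalar multiplication and addition for \( 2 \times 3 \) matrices, the two operations that make \( Mat_{2\times 3}(\mathbb{R}) \) a vector space:

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [-2.0, np.pi, -1.5]])
B = np.zeros((2, 3))          # the "zero vector" of this vector space

print(3 * A)                  # scalar multiplication, element by element
print(A + B)                  # matrix addition; adding the zero matrix changes nothing
print(0 * A)                  # scaling by zero lands on the zero matrix
```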

8.2 Linear combinations and matrix multiplication

8.2.1 Linear combinations

First, let’s look at the idea of linear combinations of vectors and relate it to matrices. Here’s a very simple motivating example:

The expression \( 2x+y-4z+w \) is a linear combination of the variables \( x,y,z \) and \( w \). It can be written as a dot product,

\[ [x,y,z,w] \cdot [2,1,-4,1], \]

or as a product of matrices,

\[ \begin{bmatrix} 2& 1 & -4 & 1 \end{bmatrix}\begin{bmatrix} x \\y\\z\\w\end{bmatrix}. \]

Here is another motivating example: An example of a linear combination of the vectors \( \begin{bmatrix} 3 \\ 2 \\ 1\end{bmatrix} \) and \( \begin{bmatrix} -7 \\ 2 \\ 4\end{bmatrix} \) is given by

\[ 3\begin{bmatrix} 3 \\ 2 \\ 1\end{bmatrix} + \begin{bmatrix} -7 \\ 2 \\ 4\end{bmatrix} = \begin{bmatrix} 2 \\ 8 \\ 7\end{bmatrix}. \]

Using matrix multiplication, we could also write

\[ \begin{bmatrix} 3 & -7 \\2 & 2 \\ 1& 4\end{bmatrix} \begin{bmatrix} 3 \\ 1\end{bmatrix} = \begin{bmatrix} 2 \\ 8 \\ 7\end{bmatrix}. \]
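
Here is a quick numerical check of this example (a sketch using numpy, which is my own choice of tool):

```python
import numpy as np

v1 = np.array([3.0, 2.0, 1.0])
v2 = np.array([-7.0, 2.0, 4.0])

# the linear combination written out directly
combo = 3 * v1 + 1 * v2

# the same combination as a matrix-vector product: the columns of M are v1 and v2
M = np.column_stack([v1, v2])
print(combo, M @ np.array([3.0, 1.0]))   # both print [2. 8. 7.]
```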

8.2.2 Matrix multiplication and dot products

Linear combinations of vectors can always be expressed via matrix multiplication, and matrix multiplication is built out of dot products. The dot product of two real vectors \( \vec{a} \) and \( \vec{b} \) in \( \mathbb{R}^n \) is

\[ \vec{a} \cdot \vec{b} = [a_1, \ldots, a_n ] \cdot [b_1, \ldots, b_n] = \sum_{i=1}^n a_i b_i. \]

We are only defining dot product for real vectors right now (complex vectors will show up in Section 9.10). For real vectors, \( \vec{a} \cdot \vec{b} = \vec{b} \cdot \vec{a} \), so dot product is commutative (not true for complex inner product!). Multiplication by scalars (real numbers) distributes over dot product, too: \( \vec{a} \cdot (c \vec{b}) = c (\vec{a} \cdot \vec{b} ) = (c \vec{a}) \cdot \vec{b} \). Often with commutativity we talk about associativity (for instance \( (3+2)+1 = 3+(2+1) \)), but for dot product this does not make sense: \( \vec{a}\cdot(\vec{b} \cdot \vec{c}) \) is not an operation that makes sense, as you can't dot a vector and a scalar.

Geometrically, the dot product \( \vec{a} \cdot \vec{b} \) is related to the angle between the vectors \( \vec{a} \) and \( \vec{b} \). Pick the smallest possible angle between the two vectors, \( \theta \) between zero and \( \pi \). Define

\[ \vec{a} \cdot \vec{b} = ||\vec{a}|| ||\vec{b}|| \cos \theta. \]

If \( \vec{a} \) and \( \vec{b} \) are perpendicular to each other, then the angle between them is \( \pi/2 \) radians and \( \vec{a} \cdot \vec{b}=0 \).

This uses the magnitude of \( \vec{a} \) and \( \vec{b} \): for \( \vec{a} \in \mathbb{R}^n \), we can define the magnitude \( ||\vec{a}|| \) by

\[ ||\vec{a}|| = \sqrt{a_1^2 + a_2^2 + \ldots + a_n^2}. \]

This should look a lot like Euclidean distance to you (if you’re not sure, just think about \( \vec{a} \in \mathbb{R}^2 \)). It is exactly that, the distance from the tail of \( \vec{a} \) at the origin to the tip of the vector \( \vec{a} \). Notice too then that \( \vec{a} \cdot \vec{a} = ||\vec{a}||^2 \). This will be useful.

Matrix multiplication, then, is built from dot products as follows: Let \( C \) be an \( m \times n \) matrix with rows given by \( \vec{c}_1 \) through \( \vec{c}_m \). Let \( D \) be an \( n \times p \) matrix with columns given by \( \vec{d}_1 \) through \( \vec{d}_p \). We can write the matrix multiplication as

\[ CD = \begin{bmatrix} \leftarrow & \vec{c}_1 & \rightarrow \\ \leftarrow & \vec{c}_2 & \rightarrow \\ & \vdots & \\ \leftarrow & \vec{c}_m & \rightarrow \end{bmatrix}\begin{bmatrix} \uparrow & \uparrow & & \uparrow \\ \vec{d}_1 & \vec{d}_2 &\ldots & \vec{d}_p\\ \downarrow &\downarrow & &\downarrow \end{bmatrix} = \begin{bmatrix} \vec{c}_1 \cdot \vec{d}_1 & \vec{c}_1 \cdot \vec{d}_2 & \ldots & \vec{c}_1 \cdot \vec{d}_p \\ \vec{c}_2 \cdot \vec{d}_1 & \vec{c}_2 \cdot \vec{d}_2 & \ldots & \vec{c}_2 \cdot \vec{d}_p \\ \vdots & &\ddots & \vdots \\ \vec{c}_m \cdot \vec{d}_1 & \vec{c}_m \cdot \vec{d}_2 & \ldots & \vec{c}_m \cdot \vec{d}_p\end{bmatrix} \]
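
Here is a small sketch (again using numpy as an assumed tool) confirming that each entry of a matrix product is the dot product of a row of the left factor with a column of the right factor:

```python
import numpy as np

C = np.array([[1.0, 2.0, 3.0],
              [-2.0, 0.0, 1.0]])      # a 2 x 3 matrix
D = np.array([[1.0, 0.0],
              [2.0, -1.0],
              [0.0, 4.0]])            # a 3 x 2 matrix

CD = C @ D                            # the 2 x 2 product
entry_01 = np.dot(C[0, :], D[:, 1])   # row 0 of C dotted with column 1 of D
print(CD)
print(CD[0, 1], entry_01)             # these two numbers agree (both are 10.0)
```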

Make sure you know the difference between a row vector and column vector! A row vector looks like

\[ \vec{r} = [r_1, r_2, \ldots, r_m] \in \mathbb{R}^m, \]

for instance, while a column vector might be

\[ \vec{c} = \begin{bmatrix} c_1 \\ c_2 \\ \vdots \\ c_n \end{bmatrix} \in \mathbb{R}^n. \]

Often people (like me) are rather sloppy and switch back and forth between writing a vector in \( \mathbb{R}^n \) as a row or column vector depending on how much paper they have. That’s not terrible. But being clear in calculations about whether you’re using a row vector or column vector is important. For instance, what is the difference between multiplying two vectors as matrices and taking the dot product between two vectors with the same shape?

Example 2.1 Let \( \vec{v} = [1,2] \) and \( \vec{w} = [-1,1] \). Then compute the following, with the knowledge that the \( ^T \) stands for transpose (exchange rows and columns, or flip over the diagonal):

\[ \vec{v} \cdot \vec{w} \]

\[ \vec{v} \vec{w} \]

\[ \vec{v}^T \vec{w} \]

\[ \vec{v} \vec{w}^T \]

Now check your answers in the footnote.
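
If you'd like to check your work numerically as well, here is a numpy sketch (my own addition) that treats \( \vec{v} \) and \( \vec{w} \) as \( 1 \times 2 \) row vectors, matching how they are written above; the commented-out line fails to run, which is itself part of the answer:

```python
import numpy as np

v = np.array([[1.0, 2.0]])    # 1 x 2 row vector
w = np.array([[-1.0, 1.0]])   # 1 x 2 row vector

print(np.dot(v.flatten(), w.flatten()))  # dot product: 1*(-1) + 2*1 = 1
# print(v @ w)                           # error: shapes (1,2) and (1,2) don't line up
print(v.T @ w)                           # a 2 x 2 matrix (the "outer product")
print(v @ w.T)                           # a 1 x 1 matrix [[1.]], matching the dot product
```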

8.3 Geometry and linear algebra

Linear algebra is, unsurprisingly, rather... linear. But it’s got a lot of geometry going on with all those angles and lengths and volumes. In this section we’ll discuss planes and parametrizations as a great first example, then tackle projection, determinants, and cross products. We’ll focus on the geometric meaning as a means of beginning the integration of linear algebra with multivariable calculus. Some of these techniques are specific to low dimensions (\( \mathbb{R}^2 \) and \( \mathbb{R}^3 \)) but they can give intuition to the generalizations to higher dimensions. As high-dimensional data analysis becomes an ever more important part of the landscape of mathematics, finance, and industry, this is useful!

8.3.1 Planes and parameterizations

In this section we’ll talk about equations of planes, but maybe it’s good to start with the equation of a line as a test case. You can write the equation of a line \( L \) in \( \mathbb{R}^3 \) in several ways. One of my favorite ways to write a line in \( \mathbb{R}^3 \) is to parameterize it. I will use the parameter \( t \in \mathbb{R} \). It’s just a scalar. Then we need a direction for the line – call it \( \vec{v} \) – and a point that lies in the line – call that \( \vec{p} \). A set-theoretic description of the points in the line, then, is

\[ L = \{\vec{x} \in \mathbb{R}^3 | \vec{x} = t \vec{v} + \vec{p} \}. \]

When \( t=0 \) we’re just at the point \( \vec{p} \), and as \( t \) ranges through the rest of \( \mathbb{R} \), we just scale along the vector \( \vec{v} \).

[Figure: images/ParameterizationOfLine]

This concept of parameterization will be very useful in exploring planes and other multivariate curves, surfaces, solids, etc. Among other things, parametrization can help us understand the notion of dimension both in linear and nonlinear contexts.

We can use matrix multiplication or dot product to write the equation of a plane in \( \mathbb{R}^3 \). For instance, the equation \( 3x-y+2z=5 \) can be rewritten as \( (3,-1,2) (x,y,z)^T = 5 \). This is a logical condition – a constraint – that picks out a certain set of points in \( \mathbb{R}^3 \). The plane given by \( 3x-y+2z=5 \) is an affine subspace of \( \mathbb{R}^3 \), not a vector space in \( \mathbb{R}^3 \). Why? First, check to see if the space \( \{ [x,y,z] \in \mathbb{R}^3 \; | \; 3x-y+2z=5\} \) is closed under scalar multiplication and vector addition. (Remember zero is a scalar!) What do you find? Second, ponder this question:

Example 3.1 What is the difference between a point in this plane and a vector in this plane? Find an example of a point in the plane and a vector lying in the plane.

You probably found your own examples, but \( (0,-5,0) \) and \( (0,0,2.5) \) are two points in the plane – they satisfy the equation \( 3x-y+2z=5 \). By contrast, \( [0,-5, -2.5] \) is a vector in the plane. A few different ways to say that: it’s a vector that lies parallel to the plane when based at the origin, so we can translate it to lie in the plane; it is a vector that goes between two points in the plane; it’s a vector perpendicular to the normal vector \( (3,-1,2) \), which is enough to characterize the plane since we’re in \( \mathbb{R}^3 \).

A plane can be parametrized using two variables, for instance \( s \) and \( t \), because it’s a two-dimensional object. For example, let \( x=s \), \( y=t \), and \( z= 2.5-1.5s+0.5t \). Notice this satisfies our previous Cartesian equation, \( 3x-y+2z=5 \)! We can write the parametric vector-valued equation like this:

\[ \begin{pmatrix} x \\ y \\z \end{pmatrix} = \begin{pmatrix} s \\ t\\ 2.5-1.5s+0.5t \end{pmatrix}. \]
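
As a quick numerical sanity check (a numpy sketch of my own, not part of the text), you can evaluate the parameterization at a few values of \( s \) and \( t \) and confirm that every output satisfies \( 3x - y + 2z = 5 \):

```python
import numpy as np

normal = np.array([3.0, -1.0, 2.0])

def f(s, t):
    """The parameterization x = s, y = t, z = 2.5 - 1.5 s + 0.5 t of the plane."""
    return np.array([s, t, 2.5 - 1.5 * s + 0.5 * t])

for s, t in [(0.0, 0.0), (1.0, -2.0), (3.5, 7.0)]:
    point = f(s, t)
    print(point, np.dot(normal, point))   # the dot product is 5.0 every time
```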

If we wish to drop the \( x,y,z \) and write a function \( f: \mathbb{R}^2 \rightarrow \mathbb{R}^3 \), we can write the below expressions as well as the previous:

\[ f(s,t) = s \begin{pmatrix} 1 \\ 0\\ -1.5 \end{pmatrix}+t \begin{pmatrix} 0 \\ 1\\ 0.5 \end{pmatrix}+\begin{pmatrix} 0 \\ 0\\ 2.5 \end{pmatrix} \]

or

\[ f(s,t) = \begin{pmatrix} 1 & 0 \\ 0 & 1\\ -1.5 & 0.5\end{pmatrix} \begin{pmatrix} s \\ t \end{pmatrix} +\begin{pmatrix} 0 \\ 0\\ 2.5 \end{pmatrix}. \]

Notice here that I’ve now written the expression as a linear shift of the linear combination of two vectors that lie in the plane, or as the linear shift of a matrix product. While simply a cosmetic rewrite, this points out a larger lesson. Explore this through examples:

Look at

\[ \begin{pmatrix} x_1 & x_2 & x_3\end{pmatrix}\begin{pmatrix}3 & 2 \\ -1 & 3 \\ 2 & 4\end{pmatrix} \]

How can this product be expanded? Write the product in two ways, as a single row vector and as the linear combination of three row vectors in \( \mathbb{R}^2 \).

Do this again for another product:

\[ \begin{pmatrix}3 & 2 \\ -1 & 3 \\ 2 & 4\end{pmatrix} \begin{pmatrix} y_1 \\ y_2\end{pmatrix} . \]

Again, rewrite the product as a single column vector in \( \mathbb{R}^3 \) and as the linear combination of two column vectors in \( \mathbb{R}^3 \).

What do you notice about the linear combinations and vector spaces that result?

This leads to the useful concepts of row space and column space, explored a little later.

8.3.2 Projection

Let’s get back to lines for a bit. One common question that arises in physics, finance, and statistics is this: “I have a vector \( \vec{a} \) that I want to explore, and another vector \( \vec{b} \) as a reference of sorts. How much of vector \( \vec{a} \) points in the direction of \( \vec{b} \)? What does this even mean?”

Examples you may have seen include the classic physics question about a block on a slippery slope. That block is going to slide down the slope, but how fast? What’s the component in the direction of gravity, and what’s the horizontal component? We can use similar ideas closer to finance, in ideas of decomposing a company’s stock’s movement into the component due to “the market” and the component due to the company itself. (Insert discussion of how this relates to alpha, beta here.***) Later, we’ll employ Gram-Schmidt decomposition (Section sec:gram-schmidt) and singular value decomposition (Section 9.13) to pursue these ideas.

For the moment, though, let’s just look at projection. Let’s define the projection of a vector \( \vec{a} \) onto another vector \( \vec{b} \). Imagine that a projection is the shadow of \( \vec{a} \) on the line in direction \( \vec{b} \) if the sun is “directly overhead.”

[Figure: images/Projection]

The algebraic formulation is

\[ \textrm{proj}_{\vec{b}} \vec{a} = \frac{\vec{a}\cdot \vec{b} }{|\vec{b}|^2} \vec{b}. \]

Notice that this vector has a direction (it goes in the direction of \( \vec{b} \)) and a magnitude (the magnitude of the projection is \( |\vec{a} |\cos (\theta) \), where \( \theta \) is the angle between the vectors \( \vec{a} \) and \( \vec{b} \)). You could figure out this definition of projection yourself by looking at the natural geometric expression \( |\vec{a}| \cos (\theta) \frac{\vec{b}}{|\vec{b}|} \) and using \( \vec{a} \cdot \vec{b} = |\vec{a}||\vec{b}| \cos (\theta) \) to prove the formula in terms of the dot product.

In particular, this helps us write \( \vec{a} \) as the sum of a component in the direction of \( \vec{b} \) and a component perpendicular to \( \vec{b} \). We can get the component perpendicular to \( \vec{b} \) by simply subtracting:

  • check that

    \[ \vec{c} = \vec{a} - \frac{\vec{a}\cdot \vec{b} }{|\vec{b}|^2} \vec{b} = \vec{a} - \textrm{proj}_{\vec{b}} \vec{a} \]

    is a vector orthogonal to \( \vec{b} \), and

  • check that \( \textrm{proj}_{\vec{b}} \vec{a} + \vec{c} = \vec{a} \).
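
Here is a short numpy sketch of the projection formula and of the orthogonal decomposition described in the two bullet points above (the particular vectors are arbitrary choices of mine):

```python
import numpy as np

def proj(a, b):
    """Projection of vector a onto vector b."""
    return (np.dot(a, b) / np.dot(b, b)) * b

a = np.array([2.0, 1.0, -1.0])
b = np.array([1.0, 1.0, 0.0])

p = proj(a, b)
c = a - p                      # the component of a perpendicular to b
print(np.dot(c, b))            # 0.0 (up to rounding): c is orthogonal to b
print(p + c, a)                # the two pieces add back up to a
```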

8.3.3 Determinants

One must reckon with determinants of square matrices at some point in linear algebra. It’s time. We will in general automate computation of determinants, so I will include here only the \( 2 \times 2 \) and a few comments on determinants and volume.

The determinant of the \( 2 \times 2 \) matrix

\[ A = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \]

is \( \det A = ad-bc \). This can be positive or negative or zero.

  • Prove that if the determinant is zero, then one column is a scalar multiple of the other (equivalently, the columns are linearly dependent).
  • Demonstrate to yourself that if the determinant is negative, then the linear transformation \( T(\vec{x}) = A\vec{x} \) “switches the order of” the standard Euclidean basis vectors \( \vec{e}_1 \) and \( \vec{e}_2 \) (changing the orientation, as a reflection would).
  • Draw some examples of transformations of the unit square via \( A \) and demonstrate to yourself that the absolute value of the determinant, \( |\det A| \), gives the area of the transformed unit square.

For larger \( n \times n \) matrices, we often use the Laplace expansion. The Laplace expansion is a specific method of breaking an \( n \times n \) determinant into \( n \) smaller \( (n-1) \times (n-1) \) determinants. It stems from a larger idea: you could compute the determinant directly by taking all \( n! \) products of matrix entries (one entry from each row and each column) and summing them, with the sign of each product determined by the number of inversions in the ordering picked out: see picture ********

The Laplace expansion makes this systematic and hides much of the work:

  • Set the following notation: \( \hat{A}_{i,j} \) is the \( (n-1) \times (n-1) \) submatrix of the \( n\times n \) matrix \( A \) you get by dropping row \( i \) and column \( j \).
  • Pick the top row of \( A \) to expand along for this Laplace expansion. This is a choice; you could use any row or column.
  • Expanding along the top row,

    \[ \det A = \sum_{j = 1}^n (-1)^{j+1} a_{1,j} \det \hat{A}_{1,j}. \]

For large matrices, you’d not want to do this by hand, but as a recursive procedure it’s a relatively easy algorithm to implement in a computer. Of course, determinants are already implemented in almost any programming language you’d care to use at work, often with a number of optimizations that we don’t have time to cover in this class.
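
Here is a minimal recursive implementation of the Laplace expansion along the top row (a teaching sketch in Python with numpy; in practice you would call `np.linalg.det`, which uses much faster factorizations):

```python
import numpy as np

def det_laplace(A):
    """Determinant via Laplace expansion along the first row.

    A is a square 2-D numpy array. This is an O(n!) teaching implementation.
    """
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    if n == 2:
        return A[0, 0] * A[1, 1] - A[0, 1] * A[1, 0]
    total = 0.0
    for j in range(n):
        # submatrix with row 0 and column j dropped
        minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)
        total += (-1) ** j * A[0, j] * det_laplace(minor)
    return total

A = np.array([[1.0, 2.0, 3.0], [-2.0, np.pi, -1.5], [0.0, 4.0, 1.0]])
print(det_laplace(A), np.linalg.det(A))   # the two values agree
```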

8.3.4 Cross products

The cross product of two vectors in \( \mathbb{R}^3 \) gives a vector that is perpendicular to both of the input vectors.

It is a funny product, as it’s only defined in \( \mathbb{R}^3 \). Contrast this with the dot product – \( \vec{a} \cdot \vec{b} \) makes sense in any positive dimension – or with the determinant from the previous section, which makes sense for any square matrix. Why only \( \mathbb{R}^3 \) for the cross product?

Because of this odd constraint, the cross product is not that useful in finance, but it’s common enough in linear algebra examples that we should not skip it here. Here’s a definition:

\[ \begin{pmatrix} a_1 \\ a_2 \\ a_3 \end{pmatrix} \times \begin{pmatrix} b_1 \\ b_2 \\ b_3 \end{pmatrix} = \begin{pmatrix} a_2b_3 - a_3b_2 \\ -(a_1b_3-a_3b_1) \\ a_1b_2-a_2b_1 \end{pmatrix}. \]

Check a few easy properties of this formula:

  • \( \vec{a} \times \vec{b} = - \vec{b} \times \vec{a} \), so in particular this is not a commutative product! order matters!
  • \( \vec{a} \times c\vec{a} = \vec{0} \) for \( c \in \mathbb{R} \), as a special case

I find this formula almost intolerable to remember as I am generally allergic to formulas, so instead I use one of the following methods: the “cover up each row” method or the “determinant of three-by-three/stripes” method. Both methods rely on knowing how to compute the determinants discussed in the previous section.

Cover up each row: write \( \vec{a} \) and \( \vec{b} \) side by side as the two columns of a \( 3 \times 2 \) array. To get the first component of \( \vec{a} \times \vec{b} \), cover up the first row and take the \( 2 \times 2 \) determinant of what remains; for the second component, cover up the second row, take the determinant, and flip the sign; for the third component, cover up the third row and take the determinant. Compare with the formula above to convince yourself that this reproduces each component.
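
Whatever mnemonic you use, you can always check a cross product numerically; here is a numpy sketch (my own example vectors) that compares the componentwise formula above with `np.cross` and verifies the two properties just listed:

```python
import numpy as np

def cross(a, b):
    """Cross product from the componentwise formula above."""
    return np.array([a[1]*b[2] - a[2]*b[1],
                     -(a[0]*b[2] - a[2]*b[0]),
                     a[0]*b[1] - a[1]*b[0]])

a = np.array([1.0, 2.0, 3.0])
b = np.array([-1.0, 0.0, 4.0])

print(cross(a, b), np.cross(a, b))                     # same vector both ways
print(np.dot(cross(a, b), a), np.dot(cross(a, b), b))  # 0.0 and 0.0: perpendicular to both
print(cross(a, b) + cross(b, a))                       # the zero vector: a x b = -(b x a)
print(cross(a, 2.5 * a))                               # the zero vector: a x (c a) = 0
```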

8.4 Useful inequalities for vectors

It’s important to mention a few special inequalities that are useful for linear algebra and for finance. First, the triangle inequality. Back in Chapter 2, we encountered a version of the triangle inequality: two sides of a triangle have to sum to a number larger than or equal to the third side of the triangle. Now let’s use vectors to express the same idea. Look at a triangle with sides \( \vec{v} \), \( \vec{w} \), and \( \vec{v}+\vec{w} \), with all vectors in \( \mathbb{R}^n \). Then the triangle inequality is

\[ ||\vec{v}||+||\vec{w}||\geq ||\vec{v}+\vec{w}||. \]

Why is this discussed right after all the material on dot products for real vectors? Because you can rewrite those magnitudes as dot products and use the Cauchy-Schwarz inequality to prove the triangle inequality in an elegant way.

The Cauchy-Schwarz inequality is really fundamental to any inner product space – the dot product is an example of a more general inner product. Using the notation we’ve been using in this chapter, the Cauchy-Schwarz inequality says

\[ |\vec{v}\cdot\vec{w}| \leq ||\vec{v}||\;||\vec{w}||. \]

You can prove this using the geometric characterization of the dot product as \( \vec{v}\cdot\vec{w} = ||\vec{v}||\;||\vec{w}|| \cos \theta \), where \( \theta \) is the angle between \( \vec{v} \) and \( \vec{w} \).

Now I suggest you try using the Cauchy-Schwarz inequality to prove the triangle inequality. Use the fact that you can rewrite \( ||\vec{v} + \vec{w}||^2 \) as \( (\vec{v} + \vec{w})\cdot(\vec{v} + \vec{w}) \). Can you finish the proof?
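
If you want empirical reassurance before you write the proof, here is a quick numpy sketch (my own addition) that checks both inequalities on randomly drawn vectors; of course a numerical check proves nothing in general, it is only a sanity check:

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(5):
    v = rng.normal(size=4)
    w = rng.normal(size=4)
    cauchy_schwarz = abs(np.dot(v, w)) <= np.linalg.norm(v) * np.linalg.norm(w)
    triangle = np.linalg.norm(v + w) <= np.linalg.norm(v) + np.linalg.norm(w)
    print(cauchy_schwarz, triangle)   # True True every time
```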

8.5 Some vectors in finance

Linear algebra is used all over finance, and here I’ll introduce four vectors that are useful in our further applications of linear algebra. First, we can represent a portfolio of stocks (or other assets) with the vector \( \vec{x} = \begin{bmatrix} x_1 & \ldots & x_m\end{bmatrix} \). Interpreted, this means we have \( x_i \) shares of stock \( i \), for \( m \) stocks \( i=1, \ldots, m \). These numbers can be real numbers: a negative entry \( x_i \) would indicate holding a short position on the stock, and we can have non-integer entries via buying fractional shares or through investing in a mutual fund or exchange-traded fund.

Second, we could construct vectors that represent what happens to the price of these assets under a particular economic scenario. Say we’re looking at a possible change in regulations, or in energy prices, or just have a forecast of what all the asset prices will be in one week. Then we could represent what happens under this hypothetical future scenario using a vector

\[ \vec{s} = \begin{bmatrix} s_1 \\ s_2 \\ \vdots \\ s_m\end{bmatrix} \]

where \( s_i \) represents the change in price of a share of asset \( i \) under the scenario. (We could equally well write a vector \( \vec{s}' \) in which \( s'_i \) represents the price of a share of asset \( i \) under the scenario, rather than the change in price.)

Notice that our portfolio vector \( \vec{x} \) and our single-scenario change-in-price vector \( \vec{s} \) (or single-scenario price vector \( \vec{s}' \)) both have \( m \) entries. This is because \( m \) assets are under consideration. Also notice that \( \vec{x} \vec{s} = \vec{x} \cdot \vec{s} \) gives the change in the value of the portfolio \( \vec{x} \) if the scenario under consideration occurs, while \( \vec{x} \vec{s}' = \vec{x} \cdot \vec{s}' \) gives the overall value of the portfolio under the given scenario.
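
As a tiny numerical illustration (the numbers are invented for this sketch), here is the product \( \vec{x}\cdot\vec{s} \) in numpy for a three-asset portfolio:

```python
import numpy as np

x = np.array([6.0, 5.0, -7.0])     # shares held; the third position is a short of 7 shares
s = np.array([1.5, -0.5, 2.0])     # change in price per share under one scenario

print(np.dot(x, s))   # 6*1.5 + 5*(-0.5) + (-7)*2 = -7.5: the portfolio loses value
```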

A third type of vector we could invent would look at the potential prices of a stock or asset \( A \) under various scenarios. For instance, what would happen to a pharmaceutical company’s stock price if (I’ll date myself) the Affordable Care Act is repealed? if President Trump decreases the Food and Drug Administration’s regulatory responsibilities and allows “fast-tracking” of new drugs? if the rules for H1B visa applicants are changed substantially? if changing trade relations with India and China change the profile of the export market? Any of these changes could affect the stock price of a pharmaceutical company, and you might want to look at these scenarios using linear algebra as a first-pass analysis. Say for stock \( A \) we consider \( n \) scenarios. Then a vector \( \vec{a} = \begin{bmatrix} a_1 & a_2 & \ldots & a_n \end{bmatrix} \) could represent the change in price of the asset \( A \) under each of the \( n \) scenarios, or a vector \( \vec{a}' = \begin{bmatrix} a'_1 & a'_2 & \ldots & a'_n \end{bmatrix} \) could represent the net price of the asset under each of the scenarios.

Fourth, an essential part of risk management and forecasting is working with the probabilities of these future scenarios. You may use “expert judgement” to come up with the probabilities of these future scenarios, or you may use the no-arbitrage principle to come up with probabilities based on the price changes forecast by your “expert judgement.” Either way, with \( n \) scenarios under consideration you’d want a vector

\[ \vec{p} = \begin{bmatrix} p_1 \\ p_2 \\ \vdots \\ p_n \end{bmatrix} \]

with each element \( p_i \) giving the probability of the \( i \)th scenario. Note that if you’re considering this situation where these \( n \) scenarios cover all future events, you must have the probabilities adding up to one by the axioms of probability:

\[ \sum_{i =1}^n p_i = 1. \]

This is the first time in our consideration of vectors in finance that we’ve had a condition like this, and it’s a little special – finally probability is starting to intersect with our multivariate work!

Again let’s look at a few matrix or dot products that are relevant. The product

\[ \vec{a} \vec{p} = \vec{a} \cdot \vec{p} = \nu_A \]

would give the expected value of the change in price of the asset \( A \) under the \( n \) scenarios under consideration. If you’re confident in your scenario analysis and the given probabilities and \( \nu_A \) is positive, you’d have an expected profit on your hands! Buy that asset! It’s not necessarily a guaranteed profit – under some scenarios you might still lose money – but it may be a “good bet” in investment terms. If you are looking at no-arbitrage pricing and probabilities, you’d expect to have

\[ \vec{a} \vec{p} = \vec{a} \cdot \vec{p} = \nu_A = 0, \]

with no expected profit. (The product \( \vec{a}'\vec{p} \) would just give the expected value, rather than change in value, of asset \( A \) over all the scenarios under consideration.)
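
Continuing in the same spirit, here is a sketch of the expected change in price \( \nu_A = \vec{a}\cdot\vec{p} \) for one asset across four scenarios (again, the numbers are invented for the illustration):

```python
import numpy as np

a = np.array([2.0, -1.0, 0.5, -3.0])   # change in price of asset A under each scenario
p = np.array([0.4, 0.1, 0.3, 0.2])     # probabilities of the scenarios

assert np.isclose(p.sum(), 1.0) and (p >= 0).all()   # p really is a probability vector
print(np.dot(a, p))   # 2*0.4 + (-1)*0.1 + 0.5*0.3 + (-3)*0.2 = 0.25: expected change in price
```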

Questions:

How would you represent a portfolio with 6 shares of stock \( A \), 5 of stock \( B \), and a short on \( 7 \) shares of stock \( C \)?

Can you find a probability vector \( \vec{p} \) that will give you an expected profit of zero given a scenario vector for stock \( A \) of \( \vec{a} = \begin{bmatrix} 1 & 3 & 0& 0.5\end{bmatrix} \)? What about for \( \vec{a} = \begin{bmatrix} 1 & 3 & 1& 0.5\end{bmatrix} \)? What problems do you encounter for the second one, and why?

8.6 Linear transformations

Consider a general \( m \times n \) matrix \( A \), with \( m \) rows and \( n \) columns. Figure out some way to remember that rows come first, columns second – easy for some to remember, but I’m a person who used to have trouble keeping left and right straight. My solution:

[Figure: images/RCCola]

Royal Crown Cola reminds me that rows come first, columns second!

Quiz yourself:

  • If we multiply the matrix \( A \) on the left by a row vector with         elements, then we get a row vector with \( n \) elements.
  • If we multiply \( A \) on the right by a column vector with \( n \) elements, then we get a column vector with         elements.

In this way, we can think of multiplication by \( A \) as a function that transforms row vectors \( \vec{x} \) in \( \mathbb{R}^m \) to row vectors \( \vec{x}A \) in \( \mathbb{R}^n \), or as a function that transforms column vectors \( \vec{y} \) in \( \mathbb{R}^n \) to column vectors \( A\vec{y} \) in \( \mathbb{R}^m \).

Example 6.1 Is multiplication by a matrix \( A \), on either the left or the right, a linear operation? (If necessary, remind yourself what linear means!)

We call these functions linear transformations, because they are nice general ways of transforming vectors from one linear space to another linear space.

Go back to our example of a row vector \( \begin{pmatrix} x & y & z\end{pmatrix} \) multiplied by a \( 3 \times 2 \) matrix

\[ A = \begin{pmatrix}3 & 2 \\ -1 & 3 \\ 2 & 4\end{pmatrix}. \]

We say that \( A \) represents a linear transformation

\[ R: \mathbb{R}^3 \rightarrow \mathbb{R}^2, \]

and we write the action as

\[ R(\vec{x} )= \vec{x} A. \]

With this notation it’s easy to see that multiplying by a matrix \( A \) is a linear operation: \( (b\vec{x} + c\vec{y})A = b\vec{x}A + c\vec{y}A \) for any \( b,c \in \mathbb{R} \) by properties of matrix multiplication, so \( R(b\vec{x}+c\vec{y}) = bR(\vec{x}) + cR(\vec{y}) \).

Example 6.2 What is the domain of the function \( \vec{x}A \) with

\[ A = \begin{pmatrix}3 & 2 \\ -1 & 3 \\ 2 & 4\end{pmatrix}? \]

What is the range, or image, of this function? The domain is \( \mathbb{R}^3 \) and range is... Well, in this case it’s all of \( \mathbb{R}^2 \) but that takes some work to prove. We need to develop some more machinery.

The concept of range is a bit sophisticated for linear transformations. We call the possible outputs of \( R \) the image of \( R \), and we notice it consists of all linear combinations of the rows of \( A \). The notation for this is \( \textrm{im}(R) \) or \( \textrm{im}(A^T) \). Here, \( A^T \) is the transpose and we write this because historically mathematicians are prejudiced in favor of multiplying with the matrix on the left and the vector on the right, and converting \( \vec{x}A \) to this format means taking \( A^T \vec{x}^T \). Reconcile this with the notation for column space below. The term for “all possible linear combinations” is span. In sentences, we can say, The image of \( R \) is the linear span of the rows of \( A \), or alternatively, The image of \( R \) is the row span of the matrix \( A \). If the rows of \( A \) are written as \( \vec{a}_1, \ldots, \vec{a}_m \), then we can write \( \textrm{span}(\vec{a}_1, \ldots, \vec{a}_m) \) for this row span.

Example 6.3 Repeat this analysis instead multiplying by a column vector on the right: the linear transformation

\[ L: \mathbb{R}^2 \rightarrow \mathbb{R}^3, \]

written as

\[ L(\vec{x} )= A\vec{x} . \]

Here you get the column span of the matrix \( A \) for the image of \( L \). If the columns of \( A \) are \( \vec{c}_1, \ldots, \vec{c}_n \), we can write \( \textrm{span}(\vec{c}_1, \ldots, \vec{c}_n) \) for the span of the columns, or we can write \( \textrm{im}(A) \) for the image of the linear transformation \( A\vec{y} \).

How do we know the dimension of the row span or the column span? We need to look at linear independence of rows and columns. This will give us the idea of the \( \textit{rank} \) of a matrix. We’ll define rank in section 8.7, but first, we’ll set up some financial concepts.

When we talked about the span of a set of vectors above, we were making a vector space by simply defining, for \( \vec{v}_i \in \mathbb{R}^m \),

\[ V = \textrm{span}(\vec{v}_1, \ldots, \vec{v}_n) = \{ \vec{x} \in \mathbb{R}^m \;|\; \vec{x} = c_1\vec{v}_1 + \cdots + c_n\vec{v}_n \textrm{ for some } c_1, \ldots, c_n \in \mathbb{R}\}. \]

In particular, the linear span of vectors \( \vec{v}_i \in \mathbb{R}^m \) is a vector subspace \( V \subset \mathbb{R}^m \).

Example 6.4 Is the row span of a matrix a vector space? Is the column span of a matrix a vector space?

Let \( V \) be a vector subspace of \( \mathbb{R}^n \). The set of vectors orthogonal to every vector in \( V \) is also a vector subspace. We denote this space by \( V^{\perp} \) and call it the orthogonal complement of \( V \):

\[ V^{\perp} = \{ \vec{x} \in \mathbb{R}^n | \vec{x}\cdot \vec{v} = 0 \forall \vec{v} \in V\}. \]

You should prove to yourself that \( V^{\perp} \) is closed under scalar multiplication and vector addition.

Example 6.5 How does this show that the set of all solutions to the equation \( A \vec{y} = \vec{0} \) is a vector subspace of \( \mathbb{R}^n \)? This vector subspace is called the right null space of the matrix \( A \).

Example 6.6 How does this show that the set of all solutions to the equation \( \vec{x}A = \vec{0} \) is a vector subspace of \( \mathbb{R}^m \)? This vector subspace is called the left null space of the matrix \( A \).

We define the following vocabulary:

  • A function \( f: A \rightarrow B \) is one-to-one if whenever \( f(a_1) = f(a_2) \), we have \( a_1 = a_2 \). Example: \( f(x) = x^3 \), where \( f: \mathbb{R}^1 \rightarrow \mathbb{R}^1 \), is one-to-one. If \( f(x) = 1 \), you know \( x = 1 \) and there are no other choices. Non-example: \( f(x)=x^2 \) is not one-to-one, because if I tell you \( f(x) = x^2 = 1 \), you can reply, “Clearly \( x \) must equal either 1 or \( -1 \)! Which would you like?”
  • A function \( f: A\rightarrow B \) is onto if for all \( b \) in \( B \) there’s an \( a \) in \( A \) so that \( f(a) = b \): the image of \( f \) covers all the points in \( B \). Example: \( f(x) = x^3 \) again (why?). Non-example: \( f(x) = x^2 \) again (why?). More subtle non-example: \( f(x) = (x, x^3) \). Here \( f: \mathbb{R}^1 \rightarrow \mathbb{R}^2 \). Points like \( (1,1) \) are in the image of this function, but is \( (1,2) \)? No! So not all points of \( \mathbb{R}^2 \) are covered by this function; \( f \) is not onto. Be careful of domain and range here.
  • For a function \( f: A\rightarrow B \), take a point \( b \) in \( B \) that is in the image of \( f \). The \( \textbf{preimage} \) of this point \( b \) in \( B \) is the set of points in \( A \) that map to \( b \) under \( f \). For instance, for \( f(x) = x^2 \), the preimage of \( 1 \) is \( \{1,-1\} \), the set of all points whose square is \( 1 \). For a multivariable example, consider \( f(x,y) = x^2 +y^2 \). Here \( f:\mathbb{R}^2 \rightarrow \mathbb{R}^1 \). Take any point in \( \mathbb{R}^1 \) and try to “go backward”: the preimage of \( -1 \) is the empty set, the preimage of \( 0 \) is \( \{(0,0)\} \), and the preimage of \( 1 \) is the circle \( x^2+y^2=1 \) in \( \mathbb{R}^2 \).

8.7 Bases

If \( V \) is the linear span of a set of vectors \( \vec{v}_1, \ldots, \vec{v}_n \), we call these \( \vec{v}_i \) a spanning set for \( V \). A minimal spanning set for \( V \) is such a set with as few elements as possible: if you remove any vector from a minimal spanning set, the remaining vectors will no longer span \( V \). We call a minimal spanning set a basis for \( V \).

Theorem 7.1 Let \( \vec{v}_1, \ldots, \vec{v}_n \) be a basis of \( V \). Then the vectors \( \vec{v}_i \) are linearly independent: that is, no \( \vec{v}_i \) can be written as a linear combination of the remaining \( n-1 \) vectors. Equivalently, the only way to write the zero vector \( \vec{0} \) as a linear combination of the \( \vec{v}_i \),

\[ c_1\vec{v}_1+ \cdots+ c_n\vec{v}_n=\vec{0} \]

is to take all coefficients \( c_1 = c_2 = \ldots = c_n =0 \). In addition, every basis for \( V \) has exactly \( n \) vectors.

Example 7.1 Use contradiction to prove the first part: if we could write \( v_n \) (for instance) as a linear combination of the other vectors, then (what?)

Example 7.2 Challenge: prove that every basis for \( V \) must have the same number of vectors.

The dimension of a vector space is the number of vectors in a basis for the vector space.

We can easily work with the linear span of a set of vectors \( \vec{v}_1, \ldots, \vec{v}_m \) by writing the vectors as the rows of an \( m \times n \) matrix. Go through the following questions and give your best answers:

Example 7.3

  • Does swapping two rows of a matrix change the row space of the resulting matrix?
  • Does replacing row \( i \) of a matrix with row \( i \) minus row \( j \) change the row space of the matrix?
  • Does multiplying a row in a matrix by a nonzero real number change the row space of the resulting matrix?

The answer to all of the above is no: each of these operations replaces rows with linear combinations of the original rows, and each operation is reversible, so the span of the rows (the row space) is unchanged. (This is why the scalar in the last operation must be nonzero – multiplying a row by zero is not reversible and can shrink the row space.) This means that we can use row reduction techniques to solve matrix equations of the form \( A\vec{y}= \vec{b} \) on paper. The goal is to streamline old techniques for solving systems of equations by row reducing the augmented matrix \( [A|\vec{b}] \) to have \( A \) in row echelon form.

Row echelon form is a special form of a matrix: any rows consisting only of zeroes sit at the bottom; every other row has first nonzero entry 1, which we call a “leading 1”; each leading 1 sits to the right of the leading 1 in the row above; and all other entries in a column containing a leading 1 are zero. Not only does this allow us to solve equations of the form \( A\vec{y} = \vec{b} \), but it allows us to easily see the rank of a matrix \( A \) (the number of leading 1s), and this form gives an easy way of determining the dimension of the right null space (the number of columns that do not contain a leading 1, that is, the number of free variables).
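
In practice you can let a computer do the row reduction. Here is a sketch using sympy (my own choice of library, and an invented example matrix); `rref()` returns the reduced form together with the pivot columns, so you can read off the rank and the number of free variables:

```python
import sympy as sp

# Row reduce an augmented matrix [A | b] for the system A y = b.
augmented = sp.Matrix([[1, 2, -1, 3],
                       [2, 4,  0, 8],
                       [0, 1,  1, 2]])
rref_matrix, pivot_columns = augmented.rref()
print(rref_matrix)     # each nonzero row has a leading 1 with zeros above and below it
print(pivot_columns)   # the pivot columns; their count is the rank
```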

Example 7.4 Find the space of solutions to

\[ \begin{pmatrix} x_1 & x_2 &x_3 \end{pmatrix}\begin{pmatrix} 1 & 0 \\ 2 & 1 \\ 3 & 4\end{pmatrix} = \begin{pmatrix}2& 1 \end{pmatrix}. \]

Hint: Using the symbol \( T \) for transpose (swapping rows and columns), we can change \( \vec{x} A = \vec{b} \) to \( A^T \vec{x}^T = \vec{b}^T \). This makes the equation easier to deal with using methods discussed in class.

The solution space is one-dimensional – we’ve got three variables and two conditions on them, so one degree of freedom in solutions. Check that your work gives you something like \( x_1 = 5x_3 \), \( x_2= 1-4x_3 \), \( x_3 \) free. We can write this answer parametrically as

\[ \begin{pmatrix} x_1 & x_2& x_3 \end{pmatrix} = \begin{pmatrix} 0 & 1 & 0 \end{pmatrix}+x_3 \begin{pmatrix} 5 & -4 &1 \end{pmatrix}. \]

Example 7.5 Show that

\[ \begin{pmatrix} 1& 1 \\ 2 & 1 \\ 3 & 4 \end{pmatrix}\begin{pmatrix}a \\ b\end{pmatrix} = \begin{pmatrix} 0 \\ 1\\ 4 \end{pmatrix} \]

has no solution.

(Find a contradiction.)

Example 7.6 Consider the following three investment opportunities, showing the net profit under four possible outcomes:

\[ \begin{pmatrix}-20, 20, 20, 20\end{pmatrix}, \begin{pmatrix}0, 20, -10, -10\end{pmatrix}, \begin{pmatrix}30, -40, -20, -30\end{pmatrix}. \]

Show that these do not offer the possibility of arbitrage, and determine the price of the following investment opportunity: \( \begin{pmatrix} 20, 0, 30, 0 \end{pmatrix} \). Hint: Use the theorem of no arbitrage! You can show there’s no possibility of arbitrage (“free money”) directly, by reducing the matrix and showing there’s no solution, or you can find a probability vector \( \vec{p} \) that satisfies case (2) of the No Arbitrage theorem. The existence of this probability vector shows that there can be no arbitrage!

The fundamental theorem of linear algebra relates the four "fundamental linear subspaces" of an \( m \times n \) matrix \( A \). These four subspaces are the right and left null spaces, the column space, and the row space. I’ll remind you that \( L: \mathbb{R}^n \rightarrow \mathbb{R}^m \) is given by \( L(\vec{x}) = A\vec{x} \), and \( R: \mathbb{R}^m \rightarrow \mathbb{R}^n \) is given by \( R(\vec{y}) = \vec{y}A \), or equivalently \( (\vec{y}A)^T = A^T \vec{y}^T \).

Theorem 7.2 The dimension of the row space of \( A \) plus the dimension of right null space of \( A \) equals \( n \).

\[ \dim (\textrm{im}(A^T))+\dim(\textrm{null}(A)) = n. \]

The dimension of the column space of \( A \) plus the dimension of left null space of \( A \) equals \( m \).

\[ \dim (\textrm{im}(A))+\dim(\textrm{null}(A^T)) = m. \]

Theorem 7.3 The dimension of row space of \( A \) is the same as the dimension of column space of \( A \), and these are both equal to the rank of matrix \( A \).
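
You can check both theorems numerically; here is a numpy sketch (with an arbitrary example matrix of my own) computing the rank of \( A \) and of \( A^T \) and the dimensions of the two null spaces:

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0, 4.0],
              [2.0, 4.0, 6.0, 8.0],    # a multiple of the first row
              [0.0, 1.0, 0.0, 1.0]])
m, n = A.shape

rank = np.linalg.matrix_rank(A)
print(rank, np.linalg.matrix_rank(A.T))   # row rank equals column rank: 2 and 2
print(n - rank)                            # dimension of the right null space
print(m - rank)                            # dimension of the left null space
```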

Example 7.7 Challenge: think about how you could get a set of orthogonal basis vectors for a vector space from any basis provided. Pick a vector to start with, then use the projection formula discussed a few classes ago to find the next one. How do you get the third basis vector? Remember it must be orthogonal to both of the previous vectors! Explain how to proceed in the case of \( n \) vectors.

Example 7.8 Challenge two: can you modify this process to produce an orthonormal basis? That means all basis vectors are orthogonal to each other, and moreover each is a unit vector.

This will be discussed more in the next chapter.

8.8 Applications to financial math

The principle of no arbitrage in finance has many different phrasings. One that I like is that if two assets have the same risk and the same cash flow, they’ve got to sell at the same price. Otherwise you could make money from exploiting the difference in price between the two. Another way people express the no arbitrage principle is to say that you can’t make more money than the market without taking on more risk, and yet another is that there is “no free lunch.”

Now, is the no arbitrage principle true? Well, not exactly. The "efficient market hypothesis" says that if there’s an arbitrage opportunity (the chance to make money without risk) then the market will notice, people will take advantage of it, and prices will then adjust to eliminate that opportunity. This means that in an efficient market, prices reflect information accurately. (Look up strong, semi-strong, and weak efficiency elsewhere!) The capital asset pricing model (CAPM) and Black-Scholes options pricing model are both built on the principle of no arbitrage, and that’s why we need to understand it. Moreover, CAPM and Black-Scholes are really useful out in the real world. But like all models, they’re wrong, as is a really strict no-arbitrage statement. Robert Shiller, for instance, looked at changes in dividends and their effect on share prices of assets. He found that share prices "overreact." Look up the work of Fama, Shiller, and Hansen, who jointly won the Nobel Prize in Economics in 2013 for their (separate) work on asset pricing. You’ll find that the truth about asset prices is a lot more complicated than no-arbitrage – but you can’t understand what is really going on without knowing this basic principle.

We will formulate a “No Arbitrage” theorem via linear algebra. First, we assume that we can form an \( m \times n \) matrix \( S \) of net profits, describing what will happen to \( m \) stocks under \( n \) outcomes or scenarios. (Entry \( S_{ij} \) is the net profit for stock \( i \) under scenario \( j \).)

Example 8.1 What are the possible net profit vectors for all the portfolios I could create? Test yourself by translating this into the language of linear algebra. What vector space am I asking for?

Consider the situation given by

\[ \begin{pmatrix} x_1 & x_2 & x_3\end{pmatrix} \left(\begin{array}{rrrr} -2 & -1 & 0 & 1 \\ 3 & 2 & -1 & -1 \\ 1 & 0 & 6 & -5 \end{array}\right) \]

We have three stocks and four scenarios under examination.

Example 8.2 Can we invest in a way that produces the vector \( \begin{pmatrix}0& 0& 0& 5 \end{pmatrix} \)? How do you figure out the answer to this? How do you interpret the answer once you have it?

Example 8.3 Can we invest in a way that produces the vector \( \left(4,2,12,-9\right) \)?

Example 8.4 Let’s add another investment opportunity, the “savings account.” This opportunity is an idealized situation in which you put in a dollar and get a dollar. This of course ignores interest rates for the moment. How do you add this information to the matrix?

Example 8.5 Given the new matrix, can you invest in a way that produces the vector \( \left(4,2,12,-9\right) \)?

Example 8.6 What is the cost of such a portfolio? Interpret this carefully, looking at net profit vectors and the savings vector and considering the cost of each.

We can find the cost of such a portfolio by solving for a probability vector that satisfies

\[ S \vec{p} = \begin{pmatrix}0&0&0&1 \end{pmatrix}^T \]

and then taking the dot product of the desired net-profit vector with this probability vector \( \vec{p} \). (Here \( S \) is the new matrix from Example 8.4, with the savings row of ones included, so the dimensions match; Example 8.7 below explains why this dot product equals the savings entry \( x_{m+1} \), which is the cash cost of the portfolio.)

By the axioms of probability, a probability vector has only non-negative entries and its entries sum to one. In particular, if there is no way to satisfy the matrix equation given above with a vector \( \vec{p} \) whose entries are all strictly positive, then we can create a portfolio whose net-profit vector has no negative entries and at least one positive entry and which costs no money – and that’s arbitrage.

Let’s go back to vector spaces so that we can gather some techniques for more efficiently solving the matrix equations above. With new language, we can also restate the “no arbitrage” theorem in terms of linear algebra and provide some strategies for proof.

Using the new language of linear algebra, we can restate the no-arbitrage theorem as the following: EITHER

  • the row space of \( S \) contains a non-negative vector with at least one positive element, OR
  • the orthogonal complement of the row space of \( S \) (the right null space) contains a vector whose elements are all strictly positive.

I use incorrect capitalization here to emphasize that these outcomes are mutually exclusive. To test your understanding, ask yourself: which of these cases is the case with arbitrage? How do you know? In the no-arbitrage scenario, what’s true about the probabilities of the scenarios?

Initially we considered the situation with \( n \) scenarios and \( m \) stocks, so

  • each portfolio \( \vec{x} \) has \( m \) entries,
  • the matrix \( S \) of net profits is an \( m \times n \) matrix,
  • and the probability vector \( \vec{p} \) has \( n \) entries.

Figuring out which case of the no-arbitrage theorem holds involves either finding a net-profit vector \( \vec{x}S \) with no negative entries and at least one strictly positive entry (making some money with no risk of loss!) together with the corresponding portfolio \( \vec{x} \) that guarantees us this risk-free money, or finding a probability vector \( \vec{p} \) with strictly positive entries that gives \( S\vec{p} = \vec{0} \).

To consider cost or the “savings account” approach, add a row of ones to the bottom of \( S \). Call the new matrix \( S' \). Add an entry \( x_{m+1} \) to the end of the portfolio vector \( \vec{x} \), representing how much money you’re putting in the savings account, and call the resulting vector \( \vec{x}' \). If the “savings account” interpretation is distasteful to you (and it does have some drawbacks), consider this a mathematical way of requiring that \( \vec{p} \) be a probability vector: that is, \( p_1 + \cdots + p_n = 1 \). The reason you might not like the “savings account” interpretation is that it does not quite line up with the idea that the matrix entries of \( S \) are net change in stock price, or profits. The row of ones does not represent absolute profit (making $3 per share of stock \( i \)) but instead leaves the amount \( x_{m+1} \) unchanged in each scenario.
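
Here is a numerical sketch of this procedure for the \( 3 \times 4 \) example matrix \( S \) from earlier in this section: append the row of ones, solve for \( \vec{p} \), and check whether every entry is strictly positive (if so, we are in the no-arbitrage case of the theorem). The use of numpy and of `np.linalg.solve` is my own choice of tooling, not something the text prescribes.

```python
import numpy as np

S = np.array([[-2.0, -1.0,  0.0,  1.0],
              [ 3.0,  2.0, -1.0, -1.0],
              [ 1.0,  0.0,  6.0, -5.0]])   # net profit of 3 stocks under 4 scenarios

S_prime = np.vstack([S, np.ones(4)])       # append the "savings account" row of ones
target = np.array([0.0, 0.0, 0.0, 1.0])    # encodes S p = 0 and p1 + ... + p4 = 1

p = np.linalg.solve(S_prime, target)       # works here because S_prime is square and invertible
print(p)                                   # [1/13, 3/13, 4/13, 5/13], all strictly positive
print((p > 0).all())                       # True: the no-arbitrage case holds for this market
```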

Example 8.7 Prove to yourself that if we have a probability vector \( \vec{p} \) so that \( S\vec{p} = \vec{0} \), then \( \vec{x}' S' \vec{p} = x_{m+1} \).

Example 8.8 Show that if we have \( \vec{p} \) a probability vector with only positive entries, then we must have negative entries in the portfolio vector \( \vec{x} \). (This is one part of the proof of the no arbitrage theorem.)

Example 8.9 Challenge: prove that if we have a vector \( \vec{x} \) whose entries are all non-negative, and at least one of whose entries is positive, so that \( \vec{x} S \) is non-negative and has at least one strictly positive entry, then there can be no strictly positive probability vector \( \vec{p} \) so that \( S \vec{p} = \vec{0} \).

8.9 Invertible transformations

This section mainly emphasizes ideas of rank and determinant, and reinforces what we’ve learned about transformations. All matrices in this section are square matrices.

Theorem 9.1 If the rank of an \( n \times n \) matrix \( A \) is \( n \), then the linear transformation \( L: \mathbb{R}^n \rightarrow \mathbb{R}^n \) defined by \( L(\vec{x}) = A \vec{x} \) is both one-to-one and onto. If the rank of \( A \) is less than \( n \), then the linear transformation \( L \) is neither one-to-one nor onto. Thus the linear transformation \( L \) is invertible if and only if the matrix \( A \) has rank \( n \).

A square matrix is of full rank if and only if its determinant is nonzero; a determinant of zero means the matrix is degenerate (the matrix gives a transformation that is not onto – the image is contained in a smaller linear subspace of the target space).

In much of linear algebra before this class, you’ve probably concentrated on full rank matrices, which give invertible transformations. That is because they give systems of equations that are easy to solve with matrix methods: if \( A \) is full rank and \( A \vec{x} = \vec{y} \), then \( \vec{x}=A^{-1} \vec{y} \). This is extraordinarily important, and also not that interesting!
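
For completeness, here is that observation in numpy (an invented example system; in practice prefer `np.linalg.solve` over forming the inverse explicitly, since it is faster and more numerically stable):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])               # full rank: det = 5, not zero
y = np.array([5.0, 10.0])

x = np.linalg.solve(A, y)                # solves A x = y directly
print(x, np.linalg.inv(A) @ y)           # same answer both ways: [1. 3.]
```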

Remember from earlier that the absolute value of the determinant of a matrix \( A \) gives the change in volume induced by the transformation \( L(\vec{x}) = A\vec{x} \). Maybe some illustrations will help: in two dimensions,

[Figure: images/TwoDTransformationArea]

and in three dimensions

[Figure: images/ThreeDTransformation]

This holds for transformations \( \mathbb{R}^n \rightarrow \mathbb{R}^n \), in fact. Think of it this way: take the unit hypercube \( [0,1]^n \) and look at its image under the transformation \( L \). The image of the unit hypercube under \( L \) will have \( n \)-dimensional volume \( |\det A| \). This also says something about transformations that are not invertible: if a square matrix is not invertible, then the determinant is zero – the transformation is neither onto nor one-to-one. That means that the image of the unit hypercube has zero \( n \)-volume, which means the unit hypercube in \( \mathbb{R}^n \) was squashed into a lower-dimensional subspace by the transformation \( L \). Remember this when we start talking about principal component analysis, dimension reduction techniques, and singular value decomposition!!
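
Here is a small numerical illustration (my own example) of the area statement in two dimensions: transform the unit square’s corners by \( A \) and compare the area of the resulting parallelogram, computed with the shoelace formula, against \( |\det A| \):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.5, 3.0]])

# images of the unit square's corners, listed in order around the parallelogram
corners = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
images = corners @ A.T                      # apply x -> A x to each corner

def shoelace(pts):
    """Area of a polygon from its vertices, listed in order."""
    x, y = pts[:, 0], pts[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

print(shoelace(images), abs(np.linalg.det(A)))   # both are 5.5
```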
