- This article is a continuation of Elementary mathematics

Believe it or not, the basis of all of mathematics is nothing more than the simple ^{*}"Next" function.

- Next(0)=1
- Next(1)=2
- Next(2)=3
- Next(3)=4

This defines the Natural numbers^{w}. Natural numbers are those used for counting.

- These have the very convenient property of being transitive^{w}. That means that if a<b and b<c then it follows that a<c. Not everything has that property. See Rock–paper–scissors and ^{*}Nontransitive dice.

## Numbers

Addition^{w} is defined as repeatedly calling the Next function. Next(Next(Next(5))) = 3+5. The inverse of addition is subtraction^{w}. But subtraction leads to the ability to write equations like $ 1-3=x $ for which there is no answer among natural numbers. To provide an answer mathematicians generalize to the set of all integers^{w} which includes negative integers.
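
The idea that addition is just repeated Next can be sketched in a few lines of Python (the function names here are ours, chosen for illustration):

```python
def next_(n):
    """The successor ("Next") function: the only primitive we need."""
    return n + 1

def add(a, b):
    """Addition defined as calling next_ repeatedly, b times."""
    result = a
    for _ in range(b):
        result = next_(result)
    return result

print(add(5, 3))   # 8, i.e. Next(Next(Next(5)))
```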

- The absolute value of `x` is defined as $ |x| = \left\{ \begin{array}{rl} x, & \text{if } x \geq 0 \\ -x, & \text{if } x < 0. \end{array}\right. $

- The study of integers is called Number theory^{w}.

- A prime number is a natural number greater than 1 that can only be divided by itself and one. If a, b, c, and d are primes then the Least common multiple^{w} of abc and c^{2}d is abc^{2}d. (See Tutorial:least common multiples)
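
The least common multiple claim can be checked numerically with Python's `math.lcm`, using concrete primes (a=2, b=3, c=5, d=7 are our arbitrary choices):

```python
import math

# With distinct primes a=2, b=3, c=5, d=7: lcm(abc, c^2 d) = abc^2 d.
abc = 2 * 3 * 5        # abc = 30
ccd = 5 * 5 * 7        # c^2 d = 175
print(math.lcm(abc, ccd))    # 1050
print(2 * 3 * 5 * 5 * 7)     # abc^2 d = 1050
```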

Multiplication^{w} is defined as repeated addition. $ 8+8+8 = 3 \cdot 8 $. The inverse of multiplication is division^{w}. But division leads to equations like $ 3/2=x $ for which there is no answer among integers. The solution is to generalize to the set of rational numbers^{w} which include fractions.

- (Addition and multiplication are fast but division is slow^{*}, even for computers.)

Exponentiation^{w} is defined as repeated multiplication. $ 7 \cdot 7 \cdot 7 = 7^3 $. The inverse of exponentiation is finding roots^{w}.

- It can be proven that $ \sqrt{2} $ cannot be a rational number. It is therefore an irrational number. Any number which isn't rational is irrational^{w}.

- By convention, 0^0 = 1. See Empty product^{w}.

When a quantity, like the charge of a single electron, becomes so small that it is *insignificant*, we, quite justifiably, treat it as though it were zero. A quantity that can be treated as though it were zero, even though it **very definitely is not**, is called infinitesimal. If $ q $ is a finite $ ( q \cdot 1 ) $ amount of charge then $ dq $ would be an infinitesimal $ ( q \cdot 1/\infty ) $ amount of charge. See Differential^{w}.

Likewise, when a quantity becomes so large that any regular finite quantity is insignificant by comparison, we call it infinite. We would say that the mass of the ocean is infinite $ ( M \cdot \infty ) $. But compared to the mass of the Milky Way galaxy our ocean is insignificant. So we would say the mass of the Galaxy is doubly infinite $ ( M \cdot \infty^2 ) $.

Infinity and the infinitesimal are called Hyperreal numbers^{w}. Hyperreals behave, in every way, exactly like real numbers. For example, $ 2 \cdot \infty $ is exactly twice as big as $ \infty. $ In reality, the mass of the ocean **is** a real number so it is hardly surprising that it behaves like one.

## Vectors

The one dimensional number line can be generalized to a two dimensional Cartesian coordinate system^{w} thereby creating multidimensional math (i.e. geometry^{w}). A single number specifies a single point on the number line. A single vector specifies a single point in this two dimensional Cartesian plane but this requires two numbers.

- If $ {\mathbf u} $ and $ {\mathbf v} $ are arbitrary vectors then we can (and usually do) write:

*$ \mathbf{u} = \begin{bmatrix} u_1 & u_2 \end{bmatrix} $*

*$ \mathbf{v} = \begin{bmatrix} v_1 & v_2 \end{bmatrix} $*

Vectors can be added:

- $ \mathbf{u} + \mathbf{v} = \begin{bmatrix} u_1+v_1 & u_2+v_2 \end{bmatrix} $

Vectors can be multiplied by numbers:

- $ 2 \mathbf{v} = \begin{bmatrix} 2v_1 & 2v_2 \end{bmatrix} $

The length of vector $ \mathbf{v} $ is denoted $ \|\mathbf{v}\|. $ The double bars are used to avoid confusion with the absolute value.

- $ \|\mathbf{v}\| = \sqrt{v_1^2 + v_2^2} $
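
These component-wise rules translate directly into code. A minimal sketch using plain tuples (helper names are ours):

```python
import math

# 2-D vectors as tuples: component-wise addition, scalar
# multiplication, and length, as defined above.
def vadd(u, v):
    return (u[0] + v[0], u[1] + v[1])

def scale(k, v):
    return (k * v[0], k * v[1])

def length(v):
    return math.sqrt(v[0]**2 + v[1]**2)

u, v = (1, 2), (3, 4)
print(vadd(u, v))      # (4, 6)
print(scale(2, v))     # (6, 8)
print(length((3, 4)))  # 5.0
```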

### Dot product

Dot product^{w}: $ \mathbf{u} \cdot \mathbf{v} = \| \mathbf{u} \|\ \| \mathbf{v}\| \cos(\theta) = u_1 v_1 + u_2 v_2 $

- $ \mathbf{u}\cdot\mathbf{v} = \begin{bmatrix}u_1 & u_2 \end{bmatrix} \begin{bmatrix}v_1 \\ v_2 \end{bmatrix} = u_1 v_1 + u_2 v_2 $

- Only parallel components multiply. The result is a number not a vector.

- The dot product works in any number of dimensions.

- If vectors $ \mathbf{u} $ and $ \mathbf{v} $ form a 90 degree angle then $ \mathbf{u \cdot v} = 0 $ because $ \cos(90^\circ)=0 $

- In ^{*}Euclidean space $ \|\mathbf{v}\|^2 = \mathbf{v}\cdot\mathbf{v}. $
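
A quick numeric check that the geometric and component formulas for the dot product agree (the vectors below are our arbitrary choices and form a 45 degree angle):

```python
import math

# Compare the two dot product formulas for u = (3, 0), v = (3, 3).
u = (3, 0)
v = (3, 3)

dot = u[0]*v[0] + u[1]*v[1]                  # component formula: u1 v1 + u2 v2
length = lambda w: math.hypot(w[0], w[1])    # ||w||
theta = math.atan2(v[1], v[0]) - math.atan2(u[1], u[0])
geometric = length(u) * length(v) * math.cos(theta)

print(dot)                   # 9
print(round(geometric, 9))   # 9.0
```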

### Cross product

Cross product^{w}: $ \mathbf u \times \mathbf v= \| \mathbf u \| \|\mathbf v \| \sin(\theta) \, \mathbf n $

- The result is a vector that is perpendicular to both $ \mathbf u $ and $ \mathbf v $. This vector can be thought of as the axis of rotation created when rotating from **u** to **v**.

- Unlike the dot product, it is only defined in three dimensions.
- In two dimensions there is no vector perpendicular to $ \mathbf u $ and $ \mathbf v $.
- In three dimensions there is only one direction (up to sign and length) perpendicular to $ \mathbf u $ and $ \mathbf v $.
- In four or more dimensions there are infinitely many vectors perpendicular to $ \mathbf u $ and $ \mathbf v $.

Rotation from z to x. The axis of rotation is y.
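
The component formula for the cross product reproduces this picture. A small sketch (the function name is ours):

```python
# Cross product of 3-D vectors; the result is perpendicular to both inputs.
def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

z = (0, 0, 1)
x = (1, 0, 0)
print(cross(z, x))   # (0, 1, 0): rotating from z toward x, the axis is y
```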

## Functions

From Wikipedia:Function (mathematics)

In mathematics, a **function** is a relation between a set of inputs and a set of outputs with the property that each input is related to exactly one output. An example is the function $ f(x)=x^2 $ that relates each number *x* to its square *x*^{2}. The output of a function *f* corresponding to an input *x* is denoted by *f*(*x*) (read "*f* of *x*"). In this example, if the input is −3, then the output is 9, and we may write *f*(−3) = 9. See Tutorial:Evaluate by Substitution. Likewise, if the input is 3, then the output is also 9, and we may write *f*(3) = 9. (The same output may be produced by more than one input, but each input gives only one output.) The input variable(s) are sometimes referred to as the argument(s) of the function.

### Euclid's "common notions"

From Wikipedia:Euclidean geometry:

Things that do not differ from one another are equal to one another

- $ a=a $

Things that are equal to the same thing are also equal to one another

- If $ a=b $ and $ b=c $ then $ a=c $

If equals are added to equals, then the wholes are equal

- If $ a=b $ and $ c=d $ then $ a+c=b+d $

If equals are subtracted from equals, then the remainders are equal

- If $ a=b $ and $ c=d $ then $ a-c=b-d $

The whole is greater than the part.

- If $ b>0 $ then $ a+b>a $

### Elementary algebra

From Wikipedia:Elementary algebra:

Elementary algebra builds on and extends arithmetic by introducing letters called variables to represent general (non-specified) numbers.

Algebraic expressions may be evaluated and simplified, based on the basic properties of arithmetic operations (addition^{w}, subtraction^{w}, multiplication^{w}, division^{w} and exponentiation^{w}). For example,

- Added terms are simplified using coefficients. For example, $ x + x + x $ can be simplified as $ 3x $ (where 3 is a numerical coefficient).

- Multiplied terms are simplified using exponents. For example, $ x \times x \times x $ is represented as $ x^3 $

- Like terms are added together,^{[2]} for example, $ 2x^2 + 3ab - x^2 + ab $ is written as $ x^2 + 4ab $, because the terms containing $ x^2 $ are added together, and the terms containing $ ab $ are added together.

- Brackets can be "multiplied out", using the distributive property^{w}. For example, $ x (2x + 3) $ can be written as $ (x \times 2x) + (x \times 3) $ which can be written as $ 2x^2 + 3x $

- Expressions can be factored. For example, $ 6x^5 + 3x^2 $, by dividing both terms by $ 3x^2 $ can be written as $ 3x^2 (2x^3 + 1) $

For any function $ f $, if $ a=b $ then the following four rules apply:

- $ f(a) = f(b) $
- $ a + c = b + c $
- $ ac = bc $
- $ a^c = b^c $

A typical algebra problem would be to solve for x:

- $ 5x - 3x - 7 = 4y + 3 $

- Using rule number 2:

- $ 5x - 3x - 7 \, {\color{red} + \, 7} = 4y + 3 \, {\color{red} + \, 7} $

- we get

- $ 5x - 3x = 4y + 10 $

- Factoring out x

- $ (5 - 3)x = 4y + 10 $

- we get

- $ 2x = 4y + 10 $

- Using rule number 3

- $ 2x \, {\color{red} \cdot \frac{1}{2}} = (4y + 10) \, {\color{red} \cdot \frac{1}{2}} $

- we get

- $ x = 2y + 5 $
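
The result can be verified by substituting x = 2y + 5 back into the original equation for several values of y:

```python
# Substitute x = 2y + 5 into 5x - 3x - 7 = 4y + 3 and confirm both
# sides agree for a range of y values.
for y in range(-3, 4):
    x = 2*y + 5
    assert 5*x - 3*x - 7 == 4*y + 3
print("x = 2y + 5 satisfies the equation")
```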

### Trigonometry

A right triangle is a triangle in which one angle (conventionally labeled gamma) is 90 degrees.

The **Pythagorean theorem** posits that in any right triangle, the square of the length of the hypotenuse is equal to the sum of the squares of both legs.

- $ c^2=a^2+b^2 $ , where $ c $ is the length of the side opposite the right angle (the hypotenuse).

For small values of x, sin x ≈ x (if x is in radians).

SOH → sin = "opposite" / "hypotenuse", that is, $ \sin A = a/c $

A sphere rotating around its axis of rotation:

### Polynomials

- From Wikipedia:Polynomial:

A polynomial^{w} can always be written in the form

- $ Z(x) = a_0 + a_1 x + a_2 x^2 + \dotsb + a_{n-1}x^{n-1} + a_n x^n $

where $ a_0, \ldots, a_n $ are constants called coefficients and *n* is the degree^{w} of the polynomial.

- A ^{*}linear polynomial is a polynomial of degree one.

Each individual ^{*}term is the product of the ^{*}coefficient and a variable raised to a nonnegative integer power.

- A ^{*}monomial has only one term.

- A ^{*}binomial has two terms.

A root (or zero) of a function is a value of x for which Z(x)=0.

- $ Z(x) = a_n(x - z_1)(x - z_2)\dotsb(x - z_n) $ where $ z_1, \ldots, z_n $ are the roots.

- The roots of the equation $ ax^2+bx+c=0 $ are given by the Quadratic formula^{w}:

$ x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}. $ See Completing the square^{w}.

- $ b^2 - 4ac $ is called the discriminant.
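
The quadratic formula is easy to implement. A sketch handling only real roots (the function name is ours):

```python
import math

# Roots of a x^2 + b x + c = 0 via the quadratic formula.
def quadratic_roots(a, b, c):
    disc = b*b - 4*a*c           # the discriminant
    if disc < 0:
        return ()                # negative discriminant: no real roots
    r = math.sqrt(disc)
    return ((-b + r) / (2*a), (-b - r) / (2*a))

print(quadratic_roots(1, -3, 2))   # (2.0, 1.0), since x^2 - 3x + 2 = (x-1)(x-2)
```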

$ (x+y)^n = {n \choose 0}x^n y^0 + {n \choose 1}x^{n-1}y^1 + {n \choose 2}x^{n-2}y^2 + \cdots + {n \choose n-1}x^1 y^{n-1} + {n \choose n}x^0 y^n, $

- Where $ \binom{n}{k} = \frac{n!}{k! (n-k)!}. $ See Binomial coefficient^{w}.

$ x^2 - y^2 = (x + y)(x - y) $
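
The binomial theorem can be checked numerically with `math.comb`, here at the arbitrarily chosen values x=3, y=2, n=5:

```python
import math

# Sum the binomial expansion of (x + y)^n term by term and compare
# against computing (x + y)^n directly.
x, y, n = 3, 2, 5
expansion = sum(math.comb(n, k) * x**(n - k) * y**k for k in range(n + 1))
print(expansion, (x + y)**n)   # 3125 3125
```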

## Elementary calculus

### Integration

- See also: Hyperreal number^{w} and Implicit differentiation^{w}

The **integral ^{w}** is a generalization of multiplication. (Mathematicians do a lot of generalizing)

- For example: a unit mass dropped from point x_{2} to point x_{1} will release energy.

- (A unit mass is a mass of one unit).

- The usual equation is a simple multiplication. We just multiply the force of gravity times the distance that the object falls and the result is how much energy is released:

- $ gravity \cdot (x_2 - x_1) = energy $

- This equation says that one unit of mass falling one unit of distance through a region with one unit of gravity will gain one unit of kinetic energy.

- But that equation can't be used if the strength of gravity is itself a function of x.

- The strength of gravity at x_{1} would be different from what it is at x_{2}.

- And in reality gravity really does depend on x (x is the distance from the center of the earth):

- $ gravity(x) = 1/x^2 $ (See inverse-square law^{w}.)

- However, the corresponding Definite integral^{w} is easily solved:

- $ \int_{x_1}^{x_2} gravity(x) \cdot dx $

The fundamental theorem of calculus is:

- $ \int_{x_1}^{x_2} f(x) \cdot dx \quad = \quad F(x_2)-F(x_1) $

- where F(x) is the indefinite integral^{w} (antiderivative^{w}).

- $ \int f(x) \cdot dx = F(x) $

Finding the indefinite integral is easy:

- $ \int k \cdot x^y \cdot dx \quad = \quad k \cdot \int x^y \cdot dx \quad = \quad k \cdot \frac{x^{y+1}}{y+1} $

- where *k* and *y* are arbitrary constants. (Units (feet, mm...) behave exactly like constants.)

- and most conveniently:

- $ \int \bigg (f(x) + g(x) \bigg) \cdot dx = \int f(x) \cdot dx + \int g(x) \cdot dx $

The integral of a function is equal to the area under the curve.

- When the "curve" is a constant (in other words, k•x^{0}) then the integral reduces to ordinary multiplication.

Examples:

- $ \begin{align} \int x^2 \quad &= \quad \phantom{3 \cdot} \frac{x^3}{3} \\ \int 3x^4 \quad &= \quad 3 \cdot \frac{x^5}{5} \\ \int \frac{x^5}{2} \quad &= \quad \frac{1}{2} \cdot \frac{x^6}{6} \end{align} $

- and

- $ \int (x^2 + 3x^4 + \frac{x^5}{2}) \quad = \quad \frac{x^3}{3} \quad + \quad 3 \cdot \frac{x^5}{5} \quad + \quad \frac{1}{2} \cdot \frac{x^6}{6} $
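
The claim that the integral is the area under the curve can be tested by approximating that area with many thin rectangles (a Riemann sum; the helper below is our own sketch) and comparing against the power rule:

```python
# Approximate the integral of x^2 from 0 to 2 with a left Riemann sum
# and compare with the power rule result, 2^3 / 3.
def riemann(f, a, b, n=100_000):
    """Add up n thin rectangles of width (b - a)/n."""
    dx = (b - a) / n
    return sum(f(a + i * dx) for i in range(n)) * dx

approx = riemann(lambda x: x**2, 0, 2)
exact = 2**3 / 3
print(round(approx, 3), round(exact, 3))   # 2.667 2.667
```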

### Finite difference

From Wikipedia:Finite difference:

The slope of a function at point x is approximately:

- $ \frac{\Delta y}{\Delta x} = \frac{f(x+\Delta x)-f(x)}{\Delta x} $

- where

- Δx is a small change in the value of x.

- Δy is the corresponding change in y.

The smaller Δx becomes the more accurate the approximation becomes. When Δx becomes so small that it is infinitesimal then we denote it dx.
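
This convergence is easy to see numerically. For f(x) = x², whose slope at x = 3 is 6, shrinking Δx improves the approximation:

```python
# Slope of f(x) = x^2 at x = 3 via the finite difference formula,
# for successively smaller values of Δx.
f = lambda x: x**2
for dx in (0.1, 0.01, 0.001):
    slope = (f(3 + dx) - f(3)) / dx
    print(dx, slope)   # slope approaches the true value, 6
```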

### Differentiation

Differentiation is the inverse of integration, just as division is the inverse of multiplication.

- The derivative^{w} of the integral of f(x) is just f(x).

The derivative of a function at any point is equal to the slope of the function at that point.

- $ f'(x)=\frac{dy}{dx} = \frac{f(x+dx)-f(x)}{dx}. $

The equation of the line tangent to a function at point a is

- $ y(x) = f(a) + f'(a)(x-a) $

The derivative of f(x) where f(x) = k•x^{y} is

- $ f'(x) = {df \over dx} = {d(k \cdot x^y) \over dx} \quad = \quad k \cdot {d(x^y) \over dx} \quad = \quad k \cdot y \cdot x^{y-1} $

- However, there is one exception that you do need to know about.

- The derivative of $ k \cdot x^0 $ is $ k \cdot 0 \cdot x^{-1} = 0 $

- If the derivative of x^{0} is not x^{-1}, then what is the integral of x^{-1}?

- The integral of $ x^{-1} $ is ln(x)^{[3]}. See natural log^{w}.

And most conveniently:

- $ (f(x) + g(x))' = f'(x) + g'(x) $

Examples:

- $ \frac{d(e^x)}{dx} = e^x $

- $ \frac{d(\sin(x))}{dx} = \cos(x) $

- $ \frac{d(\cos(x))}{dx} = -\sin(x) $

- and

- $ \frac{d}{dx} (e^x + \sin(x) + \cos(x)) = e^x + \cos(x) - \sin(x) $

### Taylor & Maclaurin series

$ n $ factorial is:

- $ n! = 1 \cdot 2 \cdot 3 \cdots n $

- For example:

- $ 5!=1 \cdot 2 \cdot 3 \cdot 4 \cdot 5 = 120 $

- And

- $ 0! = 1 $

f' = first derivative of f

f" = second derivative of f

f^{(n)} = nth derivative of f

If we know the value of a smooth function^{w} at x=0 (smooth means all its derivatives are continuous^{w}) and we also know the value of all of its derivatives at x=0 then we can determine the value at any other point *x* by using the Maclaurin series^{w}.

- $ a_0 x^0 + a_1 x^1 + a_2 x^2 + a_3 x^3 \cdots \quad \text{where} \quad a_n = {f^{(n)}(0) \over n!} $

The proof of this is actually quite simple. Plugging in a value of *x=0* causes all terms but the first to become zero. So, assuming that such a series exists, *a_{0}* must be the value of the function at *x=0*. Simply differentiate both sides of the equation and repeat for the next term. And so on.

We can easily determine the Maclaurin series expansion of the exponential function^{w} $ e^x $ (because it is equal to its own derivative).

- $ e^x = \sum_{n = 0}^{\infty} {x^n \over n!} = {x^0 \over 0!} + {x^1 \over 1!} + {x^2 \over 2!} + {x^3 \over 3!} + {x^4 \over 4!} + \cdots $

And cos(x)^{w} and sin(x)^{w} (because cosine is the derivative of sine which is the derivative of -cosine)

- $ \cos x = \frac{x^0}{0!} - \frac{x^2}{2!} + \frac{x^4}{4!} - \frac{x^6}{6!} + \cdots $

- $ \sin x = \frac{x^1}{1!} - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \cdots $
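
Truncating these series after a modest number of terms already gives excellent approximations. A sketch using the formulas above (function names are ours):

```python
import math

# Partial sums of the Maclaurin series for e^x and sin x, compared
# against the library functions at x = 1.
def maclaurin_exp(x, terms=15):
    return sum(x**n / math.factorial(n) for n in range(terms))

def maclaurin_sin(x, terms=10):
    return sum((-1)**k * x**(2*k + 1) / math.factorial(2*k + 1)
               for k in range(terms))

print(round(maclaurin_exp(1.0), 6), round(math.e, 6))        # 2.718282 2.718282
print(round(maclaurin_sin(1.0), 6), round(math.sin(1.0), 6)) # 0.841471 0.841471
```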

### Fourier Series

The Maclaurin series can't be used for a discontinuous function like a square wave because such a function is not differentiable at the jumps.

But remarkably we can use the Fourier series^{w} to expand it or any other periodic function^{w} into an infinite sum of sine waves each of which is fully differentiable^{w}!
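
A sketch of a partial Fourier sum for a square wave of amplitude 1 (the standard expansion uses only the odd harmonics; the function name is ours):

```python
import math

# Partial Fourier sum for a square wave:
#   sq(x) ≈ (4/π) · Σ_{k odd} sin(kx)/k
def square_wave_approx(x, harmonics=999):
    return (4 / math.pi) * sum(math.sin(k * x) / k
                               for k in range(1, harmonics + 1, 2))

# Away from the jumps the sum converges to the wave's value.
print(round(square_wave_approx(1.0), 2))   # close to 1, the value on (0, π)
```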

## Partial derivatives

**Partial derivatives^{w}** and **multiple integrals^{w}** generalize derivatives and integrals to multiple dimensions (i.e. multiple variables).

The partial derivative with respect to one variable $ \frac{\part f(x,y)}{\part x} $ is found by simply treating all other variables as though they were constants.

- $ \frac{\part (3x^2y)}{\part x} = 3y \frac{\part x^2}{\part x} = 3y \cdot 2x = 6xy $

Multiple integrals are found the same way.
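
The rule can be checked numerically by nudging x while holding y fixed, here for the example above at the arbitrarily chosen point (x, y) = (2, 5):

```python
# Numerically check ∂(3x²y)/∂x = 6xy at (x, y) = (2, 5), treating y
# as a constant.
f = lambda x, y: 3 * x**2 * y
dx = 1e-6
x0, y0 = 2.0, 5.0
partial_x = (f(x0 + dx, y0) - f(x0, y0)) / dx
print(round(partial_x, 2))   # 60.0, matching 6xy = 6·2·5
```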

### Gradient

Numbers are called scalars to distinguish them from vectors. A scalar function f(x) outputs a scalar number for each input value of x.

Let f(x, y) be a 2 dimensional scalar function^{w}.

- (An elevation map would be an example of a 2 dimensional scalar function because it assigns a scalar number (the height) to each point on the 2 dimensional map.)

The **Gradient^{w}** of a scalar function is a vector that points "uphill" (in the direction of steepest increase) with a magnitude equal to the slope^{w} of the function at that point. The gradient of a scalar function always points straight uphill and therefore never goes in circles.

- $ \operatorname{grad}(f) = \nabla f = \frac{\partial f}{\partial x} \mathbf{i} + \frac{\partial f}{\partial y} \mathbf{j} = F_x \mathbf{i} + F_y \mathbf{j} = \mathbf{F} $

The function f is a scalar function. But the gradient of f is **not** a scalar function. $ \mathbf{F} $ is a vector field. That is why it is written in bold text.

A vector field for the movement of air at the surface of the Earth would associate for every point on the surface of the Earth a vector with the wind speed and direction for that point. This can be drawn using arrows to represent the wind; the length (magnitude) of the arrow will be an indication of the wind speed.^{[4]}

There are places on the Earth where air rises from the surface all the way to the top of the atmosphere (thunderstorms). On our map air would seem to flow toward these points and then disappear (since it is no longer at the surface). We call these places "sinks". The opposite of a sink is a source. On our map a source would be a place where air is descending to the surface.

An even better way to represent vector fields than using short arrows is by using flux lines^{w}. (The word flux means flow.) Flux lines are unbroken lines that extend from sources all the way to the sinks (or to infinity if there are no sinks). A single flux line traces the path that a single particle would travel from a source to a sink. The intensity (for example wind speed) is indicated by the density of the flux lines. The more the lines are crowded together the greater the intensity (wind speed). Flux lines have a tendency to repel one another.

### Divergence

The **Divergence ^{w}** of the vector field is a scalar that is positive at sources and negative at sinks and zero everywhere else.

- $ \operatorname{div}\,\mathbf{F} = \nabla\cdot\mathbf{F} = \frac{\partial F_x}{\partial x} +\frac{\partial F_y}{\partial y}. $

Electric field lines begin at ^{*}positive charges and end at ^{*}negative charges. (The electric field is the negative gradient of the electric potential.)

### Curl

The **Curl ^{w}** of a vector field describes how much the flux lines are twisted. The curl is only defined in 3 dimensions.

The curl of the gradient of a scalar function is always zero but that is not true for the curl of all vector fields. Not all vector fields are the gradient of a scalar function. The flux lines of some vector fields even go in circles.

- $ \text{curl} (\mathbf{F}) = \nabla \times \mathbf{F} = \left(\frac{\partial F_z}{\partial y} - \frac{\partial F_y}{\partial z}\right) \mathbf{i} + \left(\frac{\partial F_x}{\partial z} - \frac{\partial F_z}{\partial x}\right) \mathbf{j} + \left(\frac{\partial F_y}{\partial x} - \frac{\partial F_x}{\partial y}\right) \mathbf{k} $

The flux lines of a magnetic field always go in circles. Magnetic flux lines never end. There is no such thing as ^{*}magnetic charge.

### Green's theorem

You can think of each electric field line as beginning (and ending) in a single unit of charge. The more lines there are the more charge there is. Twice as many lines means twice as much charge.

Green's theorem^{w} states that if you want to know how many field lines exit a region then you can either count how many lines cross the boundary of that region (perform a line integral^{w}) or you can simply count the number of charges within that region. See Divergence theorem^{w}.

A version of Green's theorem also works for curl.

Green's theorem is an extremely important result that is widely used in more advanced mathematics. It might seem like a trivial result, so obvious that it isn't even worth stating, but in more advanced mathematics it is used in places and in ways that are far from obvious.

Introductory mathematics/Elementary physics

## Advanced topics

### Complex numbers

The imaginary unit `i` is

- $ i = \sqrt{-1} $

- therefore

- $ i^2 = -1 $

Because no real number satisfies this equation, i is called an imaginary number.

From Wikipedia:Imaginary number and Wikipedia:Complex number

An **imaginary number** is a real number multiplied by the imaginary unit `i`. The square of an imaginary number `bi` is −*b*^{2}. For example, 5*i* is an imaginary number, and its square is −25.

An imaginary number *bi* can be added to a real number `a` to form a complex number of the form *a* + *bi*, where the real numbers `a` and `b` are called, respectively, the *real part* and the *imaginary part* of the complex number.

Geometrically, complex numbers extend the concept of the one-dimensional number line to the two-dimensional complex plane by using the horizontal axis for the real part and the vertical axis for the imaginary part. The complex number *a* + *bi* can be identified with the point (*a*, *b*) in the complex plane.

Two complex numbers $ x $ and $ y $ are easily added:

- $ x + y =(a+bi) + (c+di) = (a+c) + (b+d)i. $

Similarly, subtraction can be performed as

- $ x - y =(a+bi) - (c+di) = (a-c) + (b-d)i. $

They can also be multiplied:

- $ \begin{align} x\cdot y &= (a+bi) \cdot (c+di) \\ &=a(c+di) + bi(c+di) \\ &=ac + adi + bic + bidi \\ &=ac + bidi + adi + bic \\ &=ac + bdi^2 + adi + bci \\ &=(ac + bdi^2) + (adi + bci) \\ &=(ac - bd) + (adi + bci) \\ &=(ac - bd) + (ad + bc)i \end{align} $

To divide by a complex number just multiply by the reciprocal of the complex number. The reciprocal of a complex number is:

- $ \frac{1}{a+bi}=\frac{1}{a+bi} \cdot \frac{a-bi}{a-bi} = \frac{a-bi}{a^2+b^2} = \frac{a}{a^2+b^2} - \frac{b}{a^2+b^2}i, $
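
Python's built-in complex type implements exactly these rules (`1j` is the imaginary unit), so they can be checked directly:

```python
# Complex arithmetic with Python's built-in complex type.
x = 3 + 4j
y = 1 - 2j
print(x + y)        # (4+2j): add real and imaginary parts separately
print(x * y)        # (3·1 - 4·(-2)) + (3·(-2) + 4·1)i = (11-2j)
print(1 / (3 + 4j)) # (a - bi)/(a² + b²) = (0.12-0.16j)
```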

### Constant of integration

The derivative of the integral of f(x) is indeed just f(x) but the opposite is not always the case.

- $ \frac{d}{dx} (x^2 + {\color{red}3}) = \frac{d}{dx} (x^2) + \frac{d}{dx} (3) = 2x + 0 = 2x $

- but

- $ \int \frac{d}{dx} (x^2 + {\color{red}3}) \, dx = \int 2x = x^2 $

What happened to the 3? The answer is that it was lost.

Strictly speaking, whenever you do an indefinite integral you are supposed to add a constant of integration:

- $ \int 2x = x^2 + c $

How do we know what c is? The answer is that we don't know! We can't know. It's an unknown so we just leave it as c. c is not a variable. It's a constant. "c" is short for constant.

As a beginner you shouldn't have to worry about the constant of integration. There are two reasons for this.

- For the types of problems you as a beginner will be asked to solve, the constant will almost always be zero anyway.
- Even if the constant weren't zero, whenever you do a definite integral the constant would cancel out anyway.

- $ \int_{1}^{2} 2x = (2^2 + c) - (1^2 + c) = 2^2 - 1^2 + c - c $

For now you don't need to worry about the constant of integration but in more advanced mathematics you will.

### Wedge product

The cross product of vectors **a** and **b** defines an axis of rotation which is perpendicular to both **a** and **b**.

But the cross product (and the curl) is only defined in 3 dimensions. In 4 or more dimensions one must use the wedge product^{w}. (The wedge product can also be used in 3 dimensions.)

The wedge product of vectors **a** and **b** defines a plane of rotation which contains both **a** and **b**.

- $ a \wedge b $

## Intermediate mathematics

## References

- ↑ Wikipedia:Euclidean vector
- ↑ Andrew Marx, *Shortcut Algebra I: A Quick and Easy Way to Increase Your Algebra I Knowledge and Test Scores*, Kaplan Publishing, 2007, ISBN 9781419552885, 288 pages, page 51
- ↑ If $ y = e^x $ then $ dy/dx = y $, so $ dx = (1/y) \, dy $. Integrating both sides: $ \int (1/y) \, dy = \int dx = x = \ln(y) $.
- ↑ Wikipedia:Vector field