Understanding an example in Golan's “Linear Algebra”
The example is given below:

But I do not understand the details of calculating $\Phi_{BB}(\alpha_v)$. Could anyone explain this for me, please?

The definition of $\Phi_{BB}(\alpha_v)$ is given below:

EDIT: I mean, how does the given definition of the linear transformation affect the matrix?

linear-algebra matrices linear-transformations
asked 8 hours ago, edited 8 hours ago – hopefully
5 Answers
Part of the problem is that Proposition 8.1 is not a definition. It doesn't tell you what $\Phi_{BD}$ is, or how to compute it. It simply asserts existence.

It's also not particularly well stated as a proposition, since it asserts the existence of a family of isomorphisms based on pairs of bases $(B, D)$ on $V$ and $W$ respectively, but doesn't specify any way in which said isomorphisms differ. If you could find just one (out of the infinitely many) isomorphisms between $\operatorname{Hom}(V, W)$ and $M_{k \times n}(F)$ (call it $\phi$), then letting $\Phi_{BD} = \phi$ would technically satisfy the proposition, and constitute a proof!

Fortunately, I do know what the proposition is getting at. There is a very natural map $\Phi_{BD}$, taking a linear map $\alpha : V \to W$ to a $k \times n$ matrix.

The fundamental, intuitive idea behind this map is that linear maps are entirely determined by their action on a basis. Let's say you have a linear map $\alpha : V \to W$, and a basis $B = (v_1, \ldots, v_n)$ of $V$. That is, every vector $v \in V$ can be expressed uniquely as a linear combination of the vectors $v_1, \ldots, v_n$. If we know the values of $\alpha(v_1), \ldots, \alpha(v_n)$, then we essentially know the value of $\alpha(v)$ for any $v$, through linearity. The process involves first finding the unique $a_1, \ldots, a_n \in F$ such that
$$v = a_1 v_1 + \ldots + a_n v_n.$$
Then, using linearity,
$$\alpha(v) = \alpha(a_1 v_1 + \ldots + a_n v_n) = a_1 \alpha(v_1) + \ldots + a_n \alpha(v_n).$$

As an example of this principle in action, let's say that you had a linear map $\alpha : \Bbb{R}^2 \to \Bbb{R}^3$, and all you knew about $\alpha$ was that $\alpha(1, 1) = (2, -1, 1)$ and $\alpha(1, -1) = (0, 0, 4)$. What would be the value of $\alpha(2, 4)$?

To solve this, first express
$$(2, 4) = 3(1, 1) - 1(1, -1)$$
(note that this linear combination is unique, since $((1, 1), (1, -1))$ is a basis for $\Bbb{R}^2$, and we could have done something similar for any vector, not just $(2, 4)$). Then,
$$\alpha(2, 4) = 3\alpha(1, 1) - 1 \alpha(1, -1) = 3(2, -1, 1) - 1(0, 0, 4) = (6, -3, -1).$$

There is a converse to this principle too: if you start with a basis $(v_1, \ldots, v_n)$ for $V$, and pick an arbitrary list of vectors $(w_1, \ldots, w_n)$ from $W$ (not necessarily a basis), then there exists a unique linear transformation $\alpha : V \to W$ such that $\alpha(v_i) = w_i$. So, you don't even need to assume an underlying linear transformation exists! Just map the basis vectors wherever you want in $W$, without restriction, and there will be a (unique) linear map that maps the basis in this way.

That is, if we fix a basis $B = (v_1, \ldots, v_n)$ of $V$, then we can make a bijective correspondence between the linear maps from $V$ to $W$ and lists of $n$ vectors in $W$. The map
$$\operatorname{Hom}(V, W) \to W^n : \alpha \mapsto (\alpha(v_1), \ldots, \alpha(v_n))$$
is bijective. This is related to the $\Phi$ maps, but we still need to go one step further.

Now, let's take a basis $D = (w_1, \ldots, w_m)$ of $W$. That is, each vector in $W$ can be uniquely written as a linear combination of $w_1, \ldots, w_m$. So, we have a natural map taking a vector
$$w = b_1 w_1 + \ldots + b_m w_m$$
to its coordinate column vector
$$[w]_D = \begin{bmatrix} b_1 \\ \vdots \\ b_m \end{bmatrix}.$$
This map is an isomorphism between $W$ and $F^m$; we lose no information if we choose to express vectors in $W$ this way.

So, if we can express linear maps $\alpha : V \to W$ as a list of vectors in $W$, we could just as easily write this list of vectors in $W$ as a list of coordinate column vectors in $F^m$. Instead of thinking about $(\alpha(v_1), \ldots, \alpha(v_n))$, think about
$$([\alpha(v_1)]_D, \ldots, [\alpha(v_n)]_D).$$
Equivalently, this list of $n$ column vectors can be thought of as a matrix:
$$\left[\begin{array}{ccc} [\alpha(v_1)]_D & \cdots & [\alpha(v_n)]_D \end{array}\right].$$
This matrix is $\Phi_{BD}(\alpha)$! The procedure can be summed up as follows:

- Compute $\alpha$ applied to each basis vector in $B$ (i.e. compute $\alpha(v_1), \ldots, \alpha(v_n)$), then
- Compute the coordinate column vector of each of these transformed vectors with respect to the basis $D$ (i.e. $[\alpha(v_1)]_D, \ldots, [\alpha(v_n)]_D$), and finally,
- Put these column vectors into a single matrix.

Note that step 2 typically takes the longest. For each $\alpha(v_i)$, you need to find (somehow) the scalars $b_{i1}, \ldots, b_{im}$ such that
$$\alpha(v_i) = b_{i1} w_1 + \ldots + b_{im} w_m,$$
where $D = (w_1, \ldots, w_m)$ is the basis for $W$. How to solve this will depend on what $W$ consists of (e.g. $k$-tuples of real numbers, polynomials, matrices, functions, etc.), but it will almost always reduce to solving a system of linear equations over the field $F$.
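To make the three steps concrete, here is a minimal NumPy sketch for the case $V = \Bbb{R}^n$, $W = \Bbb{R}^m$ with vectors stored as arrays; step 2 is exactly the linear solve just described. (This is an illustration only: the helper name `matrix_of` and its calling convention are my own, not anything from Golan's text.)

```python
import numpy as np

def matrix_of(alpha, B, D):
    """Sketch of the three-step procedure above (names are illustrative).

    alpha: a function R^n -> R^m implementing the linear map.
    B: list of n basis vectors of R^n (1-D arrays).
    D: list of m basis vectors of R^m (1-D arrays).
    Returns the m x n matrix Phi_{BD}(alpha).
    """
    D_cols = np.column_stack(D)                     # basis D as matrix columns
    # Step 1: apply alpha to each basis vector in B.
    images = [alpha(v) for v in B]
    # Step 2: coordinates w.r.t. D, i.e. solve D_cols @ x = alpha(v) for x.
    coords = [np.linalg.solve(D_cols, y) for y in images]
    # Step 3: put the coordinate columns side by side into one matrix.
    return np.column_stack(coords)
```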
As for why we represent linear maps this way, I think you'd better read further in your textbook. It essentially comes down to the fact that, given any $v \in V$,
$$[\alpha(v)]_D = \Phi_{BD}(\alpha) \cdot [v]_B,$$
which reduces the (potentially complex) process of applying an abstract linear transformation to an abstract vector $v \in V$ down to simple matrix multiplication in $F$. I discuss this (with different notation) in this answer, but I suggest looking through your book first. Also, this answer has a nice diagram, but different notation again.

So, let's get into your example. In this case, $B = D = ((1, 0, 0), (0, 1, 0), (0, 0, 1))$, a basis for $V = W = \Bbb{R}^3$. We have a fixed vector $w = (w_1, w_2, w_3)$ (which is $v$ in the question, but I've chosen to change it to $w$ and keep $v$ as our dummy variable). Our linear map is $\alpha_w : \Bbb{R}^3 \to \Bbb{R}^3$ such that $\alpha_w(v) = w \times v$. Let's follow the steps.

First, we compute $\alpha_w(1, 0, 0)$, $\alpha_w(0, 1, 0)$, $\alpha_w(0, 0, 1)$:
\begin{align*}
\alpha_w(1, 0, 0) &= (w_1, w_2, w_3) \times (1, 0, 0) = (0, w_3, -w_2) \\
\alpha_w(0, 1, 0) &= (w_1, w_2, w_3) \times (0, 1, 0) = (-w_3, 0, w_1) \\
\alpha_w(0, 0, 1) &= (w_1, w_2, w_3) \times (0, 0, 1) = (w_2, -w_1, 0).
\end{align*}

Second, we need to write these vectors as coordinate column vectors with respect to $B$. Fortunately, $B$ is the standard basis; we always have, for any $v = (a, b, c) \in \Bbb{R}^3$,
$$(a, b, c) = a(1, 0, 0) + b(0, 1, 0) + c(0, 0, 1) \implies [(a, b, c)]_B = \begin{bmatrix} a \\ b \\ c \end{bmatrix}.$$
In other words, we essentially just transpose these vectors to columns, giving us
$$\begin{bmatrix} 0 \\ w_3 \\ -w_2 \end{bmatrix}, \begin{bmatrix} -w_3 \\ 0 \\ w_1 \end{bmatrix}, \begin{bmatrix} w_2 \\ -w_1 \\ 0 \end{bmatrix}.$$
Last step: put these in a matrix:
$$\Phi_{BB}(\alpha_w) = \begin{bmatrix} 0 & -w_3 & w_2 \\ w_3 & 0 & -w_1 \\ -w_2 & w_1 & 0 \end{bmatrix}.$$
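As a sanity check, one can feed this example to the `matrix_of` sketch above for a concrete $w$ (the choice $w = (1, 2, 3)$ is arbitrary) and confirm that the matrix really reproduces the map:

```python
w = np.array([1.0, 2.0, 3.0])                 # arbitrary fixed w
alpha_w = lambda u: np.cross(w, u)            # alpha_w(v) = w x v
E = [np.eye(3)[:, i] for i in range(3)]       # standard basis of R^3

M = matrix_of(alpha_w, E, E)
# M == [[ 0., -3.,  2.],
#       [ 3.,  0., -1.],
#       [-2.,  1.,  0.]]  (the matrix above with w = (1, 2, 3))
v = np.array([4.0, 5.0, 6.0])
assert np.allclose(M @ v, np.cross(w, v))     # [alpha_w(v)]_B == M [v]_B
```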
answered 7 hours ago – Theo Bendit

What about if we have a basis of four $2 \times 2$ matrices? What will the second step be, and what will the dimension of $\Phi_{BB}$ be in this case?
– hopefully, 18 mins ago
With the equations of $\alpha_v$:

Let $w = {}^{\mathrm{t}}(x, y, z)$. The coordinates of $v \times w$ are obtained as the cofactors of the determinant, expanded along the first row:
$$\begin{vmatrix}
\vec i & \vec j & \vec k \\ a_1 & a_2 & a_3 \\ x & y & z
\end{vmatrix} \rightsquigarrow \begin{pmatrix}
a_2 z - a_3 y \\ a_3 x - a_1 z \\ a_1 y - a_2 x
\end{pmatrix} = \begin{pmatrix}
0 & -a_3 & a_2 \\ a_3 & 0 & -a_1 \\ -a_2 & a_1 & 0
\end{pmatrix}\begin{pmatrix}
x \\ y \\ z
\end{pmatrix}$$
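A quick symbolic check of this identity (my own addition, using SymPy, not part of the answer):

```python
import sympy as sp

a1, a2, a3, x, y, z = sp.symbols('a1 a2 a3 x y z')
v = sp.Matrix([a1, a2, a3])
w = sp.Matrix([x, y, z])
# The skew-symmetric matrix read off from the cofactors above.
S = sp.Matrix([[0, -a3, a2],
               [a3, 0, -a1],
               [-a2, a1, 0]])
# S w should equal v x w for all x, y, z.
assert (v.cross(w) - S * w).expand() == sp.zeros(3, 1)
```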
answered 8 hours ago, edited 6 hours ago – Bernard
The details probably come in the proof of Theorem 8.1 (which you should read).

Let $B = (v_1, \dots, v_n)$ and $D = (w_1, \dots, w_k)$ be the given bases. Suppose that $\alpha \in \operatorname{Hom}(V, W)$. For each $i$ in $\{1, \dots, n\}$ there exist scalars $\phi_{ij} \in F$ such that
$$\alpha(v_i) = \phi_{1i} w_1 + \phi_{2i} w_2 + \dots + \phi_{ki} w_k.$$
Set $\Phi_{BD}(\alpha)$ to be the $k \times n$ matrix whose $(i,j)$-th entry is $\phi_{ij}$.

Now we come to angryavian's suggestion. Here $V = W = \mathbb{R}^3$, and $B = D = (e_1, e_2, e_3)$. Moreover, $\alpha(w) = v \times w$ for a fixed $v = \begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix}$. So you need to find the coefficients of $\alpha(e_1)$, $\alpha(e_2)$ and $\alpha(e_3)$ in the basis $(e_1, e_2, e_3)$.

answered 8 hours ago – Matthew Leingang
The first column of the matrix is $v \times \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}$, the second column is $v \times \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}$, and the third is $v \times \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}$.
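In code, this description of the columns is a one-liner (an illustrative NumPy snippet; the concrete $v$ stands in for $(a_1, a_2, a_3)$):

```python
import numpy as np

v = np.array([1.0, 2.0, 3.0])   # stands in for (a1, a2, a3)
# Columns are v x e1, v x e2, v x e3 (rows of np.eye(3) are the standard basis).
M = np.column_stack([np.cross(v, e) for e in np.eye(3)])
# M == [[ 0., -3.,  2.],
#      [ 3.,  0., -1.],
#      [-2.,  1.,  0.]]
```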
answered 8 hours ago – angryavian

I mean: how does the given definition of the linear transformation affect the matrix?
– hopefully, 8 hours ago
If $B = \{e_1, \dots, e_n\}$ and $D = \{f_1, \dots, f_m\}$ and $T$ is a linear transformation, then $\Phi_{BD}(T)$ is obtained by applying $T$ to each element of $B$ and writing the result in terms of $f_1, \dots, f_m$. That is, if
$$T(e_j) = \sum_{i=1}^m a_{i,j} f_i,$$
then the $j$-th column of $\Phi_{BD}(T)$ is
$$\begin{bmatrix} a_{1,j} \\ a_{2,j} \\ \vdots \\ a_{m,j} \end{bmatrix}.$$
For example, $\alpha_v(e_1) = v \times e_1 = [0, a_3, -a_2]^T = 0e_1 + a_3 e_2 - a_2 e_3$, so the first column of $\Phi_{BB}(\alpha_v)$ is $[0, a_3, -a_2]^T$.

answered 8 hours ago – Trevor Gunn