
Understanding an example in Golan's “Linear Algebra”


The example is given below:

[image: the example from Golan's book]

But I do not understand the details of calculating $\Phi_{BB}(\alpha_v)$; could anyone explain this for me, please?

The definition of $\Phi_{BB}(\alpha_v)$ is given below:

[image: the definition from Golan's book]

EDIT:
I mean, how does the given definition of the linear transformation affect the matrix?










linear-algebra matrices linear-transformations

asked 8 hours ago, edited 8 hours ago, by hopefully

5 Answers



















Part of the problem is that Proposition 8.1 is not a definition. It doesn't tell you what $\Phi_{BD}$ is, or how to compute it. It simply asserts existence.

It's also not particularly well-stated as a proposition, since it asserts the existence of a family of isomorphisms based on pairs of bases $(B, D)$ on $V$ and $W$ respectively, but doesn't specify any way in which said isomorphisms differ. If you could find just one (out of the infinitely many) isomorphisms between $\operatorname{Hom}(V, W)$ and $M_{k \times n}(F)$ (call it $\phi$), then letting $\Phi_{BD} = \phi$ would technically satisfy the proposition, and constitute a proof!

Fortunately, I do know what the proposition is getting at. There is a very natural map $\Phi_{BD}$, taking a linear map $\alpha : V \to W$ to a $k \times n$ matrix.

The fundamental, intuitive idea behind this map is the idea that linear maps are entirely determined by their action on a basis. Let's say you have a linear map $\alpha : V \to W$, and a basis $B = (v_1, \ldots, v_n)$ of $V$. That is, every vector $v \in V$ can be expressed uniquely as a linear combination of the vectors $v_1, \ldots, v_n$. If we know the values of $\alpha(v_1), \ldots, \alpha(v_n)$, then we essentially know the value of $\alpha(v)$ for any $v$, through linearity. The process involves first finding the unique $a_1, \ldots, a_n \in F$ such that
$$v = a_1 v_1 + \ldots + a_n v_n.$$
Then, using linearity,
$$\alpha(v) = \alpha(a_1 v_1 + \ldots + a_n v_n) = a_1 \alpha(v_1) + \ldots + a_n \alpha(v_n).$$



As an example of this principle in action, let's say that you had a linear map $\alpha : \Bbb{R}^2 \to \Bbb{R}^3$, and all you knew about $\alpha$ was that $\alpha(1, 1) = (2, -1, 1)$ and $\alpha(1, -1) = (0, 0, 4)$. What would be the value of $\alpha(2, 4)$?

To solve this, first express
$$(2, 4) = 3(1, 1) - 1(1, -1)$$
(note that this linear combination is unique, since $((1, 1), (1, -1))$ is a basis for $\Bbb{R}^2$, and we could have done something similar for any vector, not just $(2, 4)$). Then,
$$\alpha(2, 4) = 3\alpha(1, 1) - 1\alpha(1, -1) = 3(2, -1, 1) - 1(0, 0, 4) = (6, -3, -1).$$
There is a converse to this principle too: if you start with a basis $(v_1, \ldots, v_n)$ for $V$, and pick an arbitrary list of vectors $(w_1, \ldots, w_n)$ from $W$ (not necessarily a basis), then there exists a unique linear transformation $\alpha : V \to W$ such that $\alpha(v_i) = w_i$. So, you don't even need to assume an underlying linear transformation exists! Just map the basis vectors wherever you want in $W$, without restriction, and there will be a (unique) linear map that maps the basis in this way.
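The computation above can be sketched in plain Python (a minimal illustration of "determined by the action on a basis", not from the book; the basis and images are the ones in the example, and the $2 \times 2$ coordinate system is solved by Cramer's rule):

```python
def apply_alpha(v):
    # Basis B = ((1,1), (1,-1)) of R^2 and the known images alpha(v1), alpha(v2).
    v1, v2 = (1, 1), (1, -1)
    img1, img2 = (2, -1, 1), (0, 0, 4)
    # Solve v = a1*v1 + a2*v2 for the unique coordinates (a1, a2) via Cramer's rule.
    det = v1[0] * v2[1] - v2[0] * v1[1]
    a1 = (v[0] * v2[1] - v2[0] * v[1]) / det
    a2 = (v1[0] * v[1] - v[0] * v1[1]) / det
    # Linearity: alpha(v) = a1*alpha(v1) + a2*alpha(v2).
    return tuple(a1 * x + a2 * y for x, y in zip(img1, img2))

print(apply_alpha((2, 4)))  # -> (6.0, -3.0, -1.0), with coordinates a1 = 3, a2 = -1
```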



That is, if we fix a basis $B = (v_1, \ldots, v_n)$ of $V$, then we can make a bijective correspondence between the linear maps from $V$ to $W$, and lists of $n$ vectors in $W$. The map
$$\operatorname{Hom}(V, W) \to W^n : \alpha \mapsto (\alpha(v_1), \ldots, \alpha(v_n))$$
is bijective. This is related to the $\Phi$ maps, but we still need to go one step further.

Now, let's take a basis $D = (w_1, \ldots, w_m)$ of $W$. That is, each vector in $W$ can be uniquely written as a linear combination of $w_1, \ldots, w_m$. So, we have a natural map taking a vector
$$w = b_1 w_1 + \ldots + b_m w_m$$
to its coordinate column vector
$$[w]_D = \begin{bmatrix} b_1 \\ \vdots \\ b_m \end{bmatrix}.$$
This map is an isomorphism between $W$ and $F^m$; we lose no information if we choose to express vectors in $W$ this way.

So, if we can express linear maps $\alpha : V \to W$ as a list of vectors in $W$, we could just as easily write this list of vectors in $W$ as a list of coordinate column vectors in $F^m$. Instead of thinking about $(\alpha(v_1), \ldots, \alpha(v_n))$, think about
$$([\alpha(v_1)]_D, \ldots, [\alpha(v_n)]_D).$$
Equivalently, this list of $n$ column vectors could be thought of as a matrix:
$$\left[\begin{array}{ccc} & & \\ [\alpha(v_1)]_D & \cdots & [\alpha(v_n)]_D \\ & & \end{array}\right].$$
This matrix is $\Phi_{BD}(\alpha)$! The procedure can be summed up as follows:




1. Compute $\alpha$ applied to each basis vector in $B$ (i.e. compute $\alpha(v_1), \ldots, \alpha(v_n)$), then

2. Compute the coordinate column vector of each of these transformed vectors with respect to the basis $D$ (i.e. $[\alpha(v_1)]_D, \ldots, [\alpha(v_n)]_D$), and finally,

3. Put these column vectors into a single matrix.

Note that step 2 typically takes the longest. For each $\alpha(v_i)$, you need to find (somehow) the scalars $b_{i1}, \ldots, b_{im}$ such that
$$\alpha(v_i) = b_{i1} w_1 + \ldots + b_{im} w_m$$
where $D = (w_1, \ldots, w_m)$ is the basis for $W$. How to solve this will depend on what $W$ consists of (e.g. $k$-tuples of real numbers, polynomials, matrices, functions, etc.), but it will almost always reduce to solving a system of linear equations in the field $F$.
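Step 2 is exactly a linear solve for each $\alpha(v_i)$. A small sketch of such a coordinate computation (my own illustration, not from the book; `coords` is a hypothetical helper, the basis is made up, and exact rational arithmetic avoids rounding):

```python
from fractions import Fraction

def coords(w, D):
    # Coordinates of w in the basis D, by Gauss-Jordan elimination on the
    # augmented matrix whose columns are the basis vectors (assumes D is a basis).
    n = len(D)
    M = [[Fraction(D[j][i]) for j in range(n)] + [Fraction(w[i])] for i in range(n)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        M[col] = [x / M[col][col] for x in M[col]]      # normalise pivot row
        for r in range(n):
            if r != col and M[r][col] != 0:
                M[r] = [a - M[r][col] * b for a, b in zip(M[r], M[col])]
    return [M[i][n] for i in range(n)]

# e.g. [w]_D for w = (3, 5) in the basis D = ((1, 1), (0, 1)) of R^2:
print(coords((3, 5), [(1, 1), (0, 1)]))  # coordinates 3 and 2, since 3(1,1) + 2(0,1) = (3,5)
```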



As for why we represent linear maps this way, I think you'd better read further in your textbook. It essentially comes down to the fact that, given any $v \in V$,
$$[\alpha(v)]_D = \Phi_{BD}(\alpha) \cdot [v]_B,$$
which reduces the (potentially complex) process of applying an abstract linear transformation to an abstract vector $v \in V$ down to simple matrix multiplication in $F$. I discuss this (with different notation) in this answer, but I suggest looking through your book first. Also, this answer has a nice diagram, but different notation again.




So, let's get into your example. In this case, $B = D = ((1, 0, 0), (0, 1, 0), (0, 0, 1))$, a basis for $V = W = \Bbb{R}^3$. We have a fixed vector $w = (w_1, w_2, w_3)$ (which is $v$ in the question, but I've chosen to change it to $w$ and keep $v$ as our dummy variable). Our linear map is $\alpha_w : \Bbb{R}^3 \to \Bbb{R}^3$ such that $\alpha_w(v) = w \times v$. Let's follow the steps.

First, we compute $\alpha_w(1, 0, 0), \alpha_w(0, 1, 0), \alpha_w(0, 0, 1)$:
\begin{align*}
\alpha_w(1, 0, 0) &= (w_1, w_2, w_3) \times (1, 0, 0) = (0, w_3, -w_2) \\
\alpha_w(0, 1, 0) &= (w_1, w_2, w_3) \times (0, 1, 0) = (-w_3, 0, w_1) \\
\alpha_w(0, 0, 1) &= (w_1, w_2, w_3) \times (0, 0, 1) = (w_2, -w_1, 0).
\end{align*}

Second, we need to write these vectors as coordinate column vectors with respect to $B$. Fortunately, $B$ is the standard basis; we always have, for any $v = (a, b, c) \in \Bbb{R}^3$,
$$(a, b, c) = a(1, 0, 0) + b(0, 1, 0) + c(0, 0, 1) \implies [(a, b, c)]_B = \begin{bmatrix} a \\ b \\ c \end{bmatrix}.$$
In other words, we essentially just transpose these vectors to columns, giving us
$$\begin{bmatrix} 0 \\ w_3 \\ -w_2 \end{bmatrix}, \begin{bmatrix} -w_3 \\ 0 \\ w_1 \end{bmatrix}, \begin{bmatrix} w_2 \\ -w_1 \\ 0 \end{bmatrix}.$$

Last step: put these in a matrix:

$$\Phi_{BB}(\alpha_w) = \begin{bmatrix} 0 & -w_3 & w_2 \\ w_3 & 0 & -w_1 \\ -w_2 & w_1 & 0 \end{bmatrix}.$$
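A quick sanity check (my own, not from the book) that this matrix really does reproduce $w \times v$ under matrix multiplication, i.e. $[\alpha_w(v)]_B = \Phi_{BB}(\alpha_w)\,[v]_B$:

```python
def cross(u, v):
    # componentwise cross product u x v in R^3
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def phi(w):
    # the skew-symmetric matrix Phi_BB(alpha_w) derived above (0-indexed w)
    return [[0, -w[2], w[1]],
            [w[2], 0, -w[0]],
            [-w[1], w[0], 0]]

def matvec(M, v):
    # multiply a 3x3 matrix by a column vector
    return tuple(sum(M[i][j] * v[j] for j in range(3)) for i in range(3))

w, v = (1, 2, 3), (4, 5, 6)
assert matvec(phi(w), v) == cross(w, v)  # matrix multiplication agrees with w x v
```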




















• What about if we have four $2 \times 2$ matrices: what will the second step be, and what will the dimension of $\Phi_{BB}$ be in this case?
  – hopefully
  18 mins ago




















With the equations of $\alpha_v$:

Let $w = {}^{\mathrm t}\mkern-1.5mu(x, y, z)$. The coordinates of $v \times w$ are obtained as the cofactors of the determinant (along the first row):

$$\begin{vmatrix}
\vec i & \vec j & \vec k \\ a_1 & a_2 & a_3 \\ x & y & z
\end{vmatrix} \rightsquigarrow \begin{pmatrix}
a_2 z - a_3 y \\ a_3 x - a_1 z \\ a_1 y - a_2 x
\end{pmatrix} = \begin{pmatrix}
0 & -a_3 & a_2 \\ a_3 & 0 & -a_1 \\ -a_2 & a_1 & 0
\end{pmatrix}\begin{pmatrix}
x \\ y \\ z
\end{pmatrix}$$





























The details probably come in the proof of Theorem 8.1 (which you should read).

Let $B = (v_1,\dots,v_n)$ and $D = (w_1,\dots,w_k)$ be the given bases. Suppose that $\alpha \in \operatorname{Hom}(V,W)$. For each $i$ in $\{1,\dots,n\}$ there exist scalars $\varphi_{ji} \in F$ such that
$$
\alpha(v_i) = \varphi_{1i}w_1 + \varphi_{2i}w_2 + \dots + \varphi_{ki} w_k
$$

Set $\Phi_{BD}(\alpha)$ to be the $k\times n$ matrix whose $(i,j)$-th entry is $\varphi_{ij}$.

Now we come to angryavian's suggestion. Here $V = W = \mathbb{R}^3$, and $B = D = (e_1,e_2,e_3)$. Moreover, $\alpha(w) = v \times w$ for a fixed $v = \begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix}$. So you need to find the coefficients of $\alpha(e_1)$, $\alpha(e_2)$ and $\alpha(e_3)$ in the basis $(e_1,e_2,e_3)$.





























The first column of the matrix is $v \times \begin{bmatrix}1 \\ 0 \\ 0\end{bmatrix}$, the second column is $v \times \begin{bmatrix}0 \\ 1 \\ 0\end{bmatrix}$, and the third is $v \times \begin{bmatrix}0 \\ 0 \\ 1\end{bmatrix}$.




















• I mean, how does the given definition of the linear transformation affect the matrix?
  – hopefully
  8 hours ago



















If $B = \{e_1,\dots,e_n\}$ and $D = \{f_1,\dots,f_m\}$ and $T$ is a linear transformation, then $\Phi_{BD}(T)$ is obtained by applying $T$ to each element of $B$ and writing the result in terms of $f_1,\dots,f_m$. That is, if

$$ T(e_j) = \sum_{i=1}^m a_{i,j}f_i, $$

then the $j$-th column of $\Phi_{BD}(T)$ is

$$ \begin{bmatrix} a_{1,j} \\ a_{2,j} \\ \vdots \\ a_{m,j} \end{bmatrix}. $$

For example, $\alpha_v(e_1) = v \times e_1 = [0,a_3,-a_2]^T = 0e_1 + a_3e_2 - a_2e_3$, so the first column of $\Phi_{BB}(\alpha_v)$ is $[0,a_3,-a_2]^T$.
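Following this recipe, the whole matrix can be assembled column by column. A small sketch (my own illustration, not from the answer) with a concrete $v = (a_1, a_2, a_3) = (1, 2, 3)$:

```python
def cross(u, v):
    # componentwise cross product u x v in R^3
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

# Column j of Phi_BB(alpha_v) is alpha_v(e_j) = v x e_j in the standard basis.
v = (1, 2, 3)
es = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
cols = [cross(v, e) for e in es]

# transpose the list of columns into rows to display the matrix
matrix = [tuple(col[i] for col in cols) for i in range(3)]
print(matrix)  # -> [(0, -3, 2), (3, 0, -1), (-2, 1, 0)]
```

The printed rows are exactly the skew-symmetric matrix from the earlier answers with $a_1 = 1$, $a_2 = 2$, $a_3 = 3$.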























                Your Answer








                StackExchange.ready(function()
                var channelOptions =
                tags: "".split(" "),
                id: "69"
                ;
                initTagRenderer("".split(" "), "".split(" "), channelOptions);

                StackExchange.using("externalEditor", function()
                // Have to fire editor after snippets, if snippets enabled
                if (StackExchange.settings.snippets.snippetsEnabled)
                StackExchange.using("snippets", function()
                createEditor();
                );

                else
                createEditor();

                );

                function createEditor()
                StackExchange.prepareEditor(
                heartbeatType: 'answer',
                autoActivateHeartbeat: false,
                convertImagesToLinks: true,
                noModals: true,
                showLowRepImageUploadWarning: true,
                reputationToPostImages: 10,
                bindNavPrevention: true,
                postfix: "",
                imageUploader:
                brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
                contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/4.0/"u003ecc by-sa 4.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
                allowUrls: true
                ,
                noCode: true, onDemand: true,
                discardSelector: ".discard-answer"
                ,immediatelyShowMarkdownHelp:true
                );



                );














                draft saved

                draft discarded
















                StackExchange.ready(
                function ()
                StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fmath.stackexchange.com%2fquestions%2f3361293%2funderstanding-an-example-in-golans-linear-algebra%23new-answer', 'question_page');

                );

                Post as a guest















                Required, but never shown

























                5 Answers
                5






                active

                oldest

                votes








                5 Answers
                5






                active

                oldest

                votes









                active

                oldest

                votes






                active

                oldest

                votes









                1














                $begingroup$

                Part of the problem is that Proposition 8.1 is not a definition. It doesn't tell you what $Phi_BD$ is, or how to compute it. It simply asserts existence.



                It's also not particularly well-stated as a proposition, since it asserts the existence of a family of isomorphisms based on pairs of bases $(B, D)$ on $V$ and $W$ respectively, but doesn't specify any way in which said isomorphisms differ. If you could find just one (out of the infinitely many) isomorphisms between $operatornameHom(V, W)$ and $M_k times n(F)$ (call it $phi$), then letting $Phi_BD = phi$ would technically satisfy the proposition, and constitute a proof!



                Fortunately, I do know what the proposition is getting at. There is a very natural map $Phi_BD$, taking a linear map $alpha : V to W$, to a $k times n$ matrix.



                The fundamental, intuitive idea behind this map is the idea that linear maps are entirely determined by their action on a basis. Let's say you have a linear map $alpha : V to W$, and a basis $B = (v_1, ldots, v_n)$ of $V$. That is, every vector $v in V$ can be expressed uniquely as a linear combination of the vectors $v_1, ldots, v_n$. If we know the values of $alpha(v_1), ldots, alpha(v_n)$, then we essentially know the value of $alpha(v)$ for any $v$, through linearity. The process involves first finding the unique $a_1, ldots, a_n in F$ such that
                $$v = a_1 v_1 + ldots + a_n v_n.$$
                Then, using linearity,
                $$alpha(v) = alpha(a_1 v_1 + ldots + a_n v_n) = a_1 alpha(v_1) + ldots + a_n alpha(v_n).$$



                As an example of this principle in action, let's say that you had a linear map $alpha : BbbR^2 to BbbR^3$, and all you knew about $alpha$ was that $alpha(1, 1) = (2, -1, 1)$ and $alpha(1, -1) = (0, 0, 4)$. What would be the value of $alpha(2, 4)$?



                To solve this, first express
                $$(2, 4) = 3(1, 1) + 1(1, -1)$$
                (note that this linear combination is unique, since $((1, 1), (1, -1))$ is a basis for $BbbR^2$, and we could have done something similar for any vector, not just $(2, 4)$). Then,
                $$alpha(2, 4) = 3alpha(1, 1) + 1 alpha(1, -1) = 3(2, -1, 1) + 1(0, 0, 4) = (6, -3, 7).$$
                There is a converse to this principle too: if you start with a basis $(v_1, ldots, v_n)$ for $V$, and pick an arbitrary list of vectors $(w_1, ldots, w_n)$ from $W$ (not necessarily a basis), then there exists a unique linear transformation $alpha : V to W$ such that $alpha(v_i) = w_i$. So, you don't even need to assume an underlying linear transformation exists! Just map the basis vectors wherever you want in $W$, without restriction, and there will be a (unique) linear map that maps the basis in this way.



                That is, if we fix a basis $B = (v_1, ldots, v_n)$ of $V$, then we can make a bijective correspondence between the linear maps from $V$ to $W$, and lists of $n$ vectors in $W$. The map
                $$operatornameHom(V, W) to W^n : alpha mapsto (alpha(v_1), ldots, alpha(v_n))$$
                is bijective. This is related to the $Phi$ maps, but we still need to go one step further.



                Now, let's take a basis $D = (w_1, ldots, w_m)$ of $W$. That is, each vector in $W$ can be uniquely written as a linear combination of $w_1, ldots, w_m$. So, we have a natural map taking a vector
                $$w = b_1 w_1 + ldots + b_n w_n$$
                to its coordinate column vector
                $$[w]_D = beginbmatrix b_1 \ vdots \ b_n endbmatrix.$$
                This map is an isomorphism between $W$ and $F^m$; we lose no information if we choose to express vectors in $W$ this way.



                So, if we can express linear maps $alpha : V to W$ as a list of vectors in $W$, we could just as easily write this list of vectors in $W$ as a list of coordinate column vectors in $F^m$. Instead of thinking about $(alpha(v_1), ldots, alpha(v_n))$, think about
                $$([alpha(v_1)]_D, ldots, [alpha(v_n)]_D).$$
                Equivalently, this list of $n$ column vectors could be thought of as a matrix:
                $$left[beginarrayc & & \ [alpha(v_1)]_D & cdots & [alpha(v_n)]_D \ & & endarrayright].$$
                This matrix is $Phi_BD$! The procedure can be summed up as follows:




                1. Compute $alpha$ applied to each basis vector in $B$ (i.e. compute $alpha(v_1), ldots, alpha(v_n)$), then

                2. Compute the coordinate column vector of each of these transformed vectors with respect to the basis $D$ (i.e. $[alpha(v_1)]_D, ldots, [alpha(v_n)]_D$), and finally,

                3. Put these column vectors into a single matrix.



                Note that step 2 typically takes the longest. For each $alpha(v_i)$, you need to find (somehow) the scalars $b_i1, ldots, b_im$ such that
                $$alpha(v_i) = b_i1 w_1 + ldots + b_im w_m$$
                where $D = (w_1, ldots, w_m)$ is the basis for $W$. How to solve this will depend on what $W$ consists of (e.g. $k$-tuples of real numbers, polynomials, matrices, functions, etc), but it will almost always reduce to solving a system of linear equations in the field $F$.



                As for why we represent linear maps this way, I think you'd better read further in your textbook. It essentially comes down to the fact that, given any $v in V$,
                $$[alpha(v)]_D = Phi_BD(alpha) cdot [v]_B,$$
                which reduces the (potentially complex) process of applying an abstract linear transformation on an abstract vector $v in V$ down to simple matrix multiplication in $F$. I discuss this (with different notation) in this answer, but I suggest looking through your book first. Also, this answer has a nice diagram, but different notation again.




                So, let's get into your example. In this case, $B = D = ((1, 0, 0), (0, 1, 0), (0, 0, 1))$, a basis for $V = W = BbbR^3$. We have a fixed vector $w = (w_1, w_2, w_3)$ (which is $v$ in the question, but I've chosen to change it to $w$ and keep $v$ as our dummy variable). Our linear map is $alpha_w : BbbR^3 to BbbR^3$ such that $alpha_w(v) = w times v$. Let's follow the steps.



                First, we compute $alpha_w(1, 0, 0), alpha_w(0, 1, 0), alpha_w(0, 0, 1)$:
                beginalign*
                alpha_w(1, 0, 0) &= (w_1, w_2, w_3) times (1, 0, 0) = (0, w_3, -w_2) \
                alpha_w(0, 1, 0) &= (w_1, w_2, w_3) times (0, 1, 0) = (-w_3, 0, w_1) \
                alpha_w(0, 0, 1) &= (w_1, w_2, w_3) times (0, 0, 1) = (w_2, -w_1, 0).
                endalign*



                Second, we need to write these vectors as coordinate column vectors with respect to $B$. Fortunately, $B$ is the standard basis; we always have, for any $v = (a, b, c) in BbbR^3$,
                $$(a, b, c) = a(1, 0, 0) + b(0, 1, 0) + c(0, 0, 1) implies [(a, b, c)]_B = beginbmatrix a \ b \ cendbmatrix.$$
                In other words, we essentially just transpose these vectors to columns, giving us,
                $$beginbmatrix 0 \ w_3 \ -w_2endbmatrix, beginbmatrix -w_3 \ 0 \ w_1endbmatrix, beginbmatrix w_2 \ -w_1 \ 0endbmatrix.$$



                Last step: put these in a matrix:



                $$Phi_BB(alpha_w) = beginbmatrix 0 & -w_3 & w_2 \ w_3 & 0 & -w_1 \ -w_2 & w_1 & 0 endbmatrix.$$






                share|cite|improve this answer









                $endgroup$














                • $begingroup$
                  what about if we have 4 $2 times 2$ matrices what will be the second step and what will be the dimension of $phi_ (B, B)$ in this case?
                  $endgroup$
                  – hopefully
                  18 mins ago
















                1














                $begingroup$

                Part of the problem is that Proposition 8.1 is not a definition. It doesn't tell you what $Phi_BD$ is, or how to compute it. It simply asserts existence.



                It's also not particularly well-stated as a proposition, since it asserts the existence of a family of isomorphisms based on pairs of bases $(B, D)$ on $V$ and $W$ respectively, but doesn't specify any way in which said isomorphisms differ. If you could find just one (out of the infinitely many) isomorphisms between $operatornameHom(V, W)$ and $M_k times n(F)$ (call it $phi$), then letting $Phi_BD = phi$ would technically satisfy the proposition, and constitute a proof!



                Fortunately, I do know what the proposition is getting at. There is a very natural map $Phi_BD$, taking a linear map $alpha : V to W$, to a $k times n$ matrix.



                The fundamental, intuitive idea behind this map is the idea that linear maps are entirely determined by their action on a basis. Let's say you have a linear map $alpha : V to W$, and a basis $B = (v_1, ldots, v_n)$ of $V$. That is, every vector $v in V$ can be expressed uniquely as a linear combination of the vectors $v_1, ldots, v_n$. If we know the values of $alpha(v_1), ldots, alpha(v_n)$, then we essentially know the value of $alpha(v)$ for any $v$, through linearity. The process involves first finding the unique $a_1, ldots, a_n in F$ such that
                $$v = a_1 v_1 + ldots + a_n v_n.$$
                Then, using linearity,
                $$alpha(v) = alpha(a_1 v_1 + ldots + a_n v_n) = a_1 alpha(v_1) + ldots + a_n alpha(v_n).$$



                As an example of this principle in action, let's say that you had a linear map $alpha : BbbR^2 to BbbR^3$, and all you knew about $alpha$ was that $alpha(1, 1) = (2, -1, 1)$ and $alpha(1, -1) = (0, 0, 4)$. What would be the value of $alpha(2, 4)$?



                To solve this, first express
                $$(2, 4) = 3(1, 1) + 1(1, -1)$$
                (note that this linear combination is unique, since $((1, 1), (1, -1))$ is a basis for $BbbR^2$, and we could have done something similar for any vector, not just $(2, 4)$). Then,
                $$alpha(2, 4) = 3alpha(1, 1) + 1 alpha(1, -1) = 3(2, -1, 1) + 1(0, 0, 4) = (6, -3, 7).$$
                There is a converse to this principle too: if you start with a basis $(v_1, ldots, v_n)$ for $V$, and pick an arbitrary list of vectors $(w_1, ldots, w_n)$ from $W$ (not necessarily a basis), then there exists a unique linear transformation $alpha : V to W$ such that $alpha(v_i) = w_i$. So, you don't even need to assume an underlying linear transformation exists! Just map the basis vectors wherever you want in $W$, without restriction, and there will be a (unique) linear map that maps the basis in this way.



                That is, if we fix a basis $B = (v_1, ldots, v_n)$ of $V$, then we can make a bijective correspondence between the linear maps from $V$ to $W$, and lists of $n$ vectors in $W$. The map
                $$operatornameHom(V, W) to W^n : alpha mapsto (alpha(v_1), ldots, alpha(v_n))$$
                is bijective. This is related to the $Phi$ maps, but we still need to go one step further.



                Now, let's take a basis $D = (w_1, ldots, w_m)$ of $W$. That is, each vector in $W$ can be uniquely written as a linear combination of $w_1, ldots, w_m$. So, we have a natural map taking a vector
                $$w = b_1 w_1 + ldots + b_n w_n$$
                to its coordinate column vector
                $$[w]_D = beginbmatrix b_1 \ vdots \ b_n endbmatrix.$$
                This map is an isomorphism between $W$ and $F^m$; we lose no information if we choose to express vectors in $W$ this way.



                So, if we can express linear maps $alpha : V to W$ as a list of vectors in $W$, we could just as easily write this list of vectors in $W$ as a list of coordinate column vectors in $F^m$. Instead of thinking about $(alpha(v_1), ldots, alpha(v_n))$, think about
                $$([alpha(v_1)]_D, ldots, [alpha(v_n)]_D).$$
                Equivalently, this list of $n$ column vectors could be thought of as a matrix:
                $$left[beginarrayc & & \ [alpha(v_1)]_D & cdots & [alpha(v_n)]_D \ & & endarrayright].$$
                This matrix is $Phi_BD$! The procedure can be summed up as follows:




                1. Compute $alpha$ applied to each basis vector in $B$ (i.e. compute $alpha(v_1), ldots, alpha(v_n)$), then

                2. Compute the coordinate column vector of each of these transformed vectors with respect to the basis $D$ (i.e. $[alpha(v_1)]_D, ldots, [alpha(v_n)]_D$), and finally,

                3. Put these column vectors into a single matrix.



                Note that step 2 typically takes the longest. For each $alpha(v_i)$, you need to find (somehow) the scalars $b_i1, ldots, b_im$ such that
                $$alpha(v_i) = b_i1 w_1 + ldots + b_im w_m$$
                where $D = (w_1, ldots, w_m)$ is the basis for $W$. How to solve this will depend on what $W$ consists of (e.g. $k$-tuples of real numbers, polynomials, matrices, functions, etc), but it will almost always reduce to solving a system of linear equations in the field $F$.



                As for why we represent linear maps this way, I think you'd better read further in your textbook. It essentially comes down to the fact that, given any $v in V$,
                $$[alpha(v)]_D = Phi_BD(alpha) cdot [v]_B,$$
                which reduces the (potentially complex) process of applying an abstract linear transformation on an abstract vector $v in V$ down to simple matrix multiplication in $F$. I discuss this (with different notation) in this answer, but I suggest looking through your book first. Also, this answer has a nice diagram, but different notation again.




So, let's get into your example. In this case, $B = D = ((1, 0, 0), (0, 1, 0), (0, 0, 1))$, a basis for $V = W = \mathbb{R}^3$. We have a fixed vector $w = (w_1, w_2, w_3)$ (which is $v$ in the question, but I've chosen to change it to $w$ and keep $v$ as our dummy variable). Our linear map is $\alpha_w : \mathbb{R}^3 \to \mathbb{R}^3$ such that $\alpha_w(v) = w \times v$. Let's follow the steps.



First, we compute $\alpha_w(1, 0, 0), \alpha_w(0, 1, 0), \alpha_w(0, 0, 1)$:
\begin{align*}
\alpha_w(1, 0, 0) &= (w_1, w_2, w_3) \times (1, 0, 0) = (0, w_3, -w_2) \\
\alpha_w(0, 1, 0) &= (w_1, w_2, w_3) \times (0, 1, 0) = (-w_3, 0, w_1) \\
\alpha_w(0, 0, 1) &= (w_1, w_2, w_3) \times (0, 0, 1) = (w_2, -w_1, 0).
\end{align*}



Second, we need to write these vectors as coordinate column vectors with respect to $B$. Fortunately, $B$ is the standard basis; we always have, for any $v = (a, b, c) \in \mathbb{R}^3$,
$$(a, b, c) = a(1, 0, 0) + b(0, 1, 0) + c(0, 0, 1) \implies [(a, b, c)]_B = \begin{bmatrix} a \\ b \\ c \end{bmatrix}.$$
In other words, we essentially just transpose these vectors to columns, giving us
$$\begin{bmatrix} 0 \\ w_3 \\ -w_2 \end{bmatrix}, \begin{bmatrix} -w_3 \\ 0 \\ w_1 \end{bmatrix}, \begin{bmatrix} w_2 \\ -w_1 \\ 0 \end{bmatrix}.$$



                Last step: put these in a matrix:



$$\Phi_{BB}(\alpha_w) = \begin{bmatrix} 0 & -w_3 & w_2 \\ w_3 & 0 & -w_1 \\ -w_2 & w_1 & 0 \end{bmatrix}.$$
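As a quick numerical sanity check (the specific values of $w$ and $v$ below are arbitrary, not from the question), this matrix does act as $v \mapsto w \times v$:

```python
import numpy as np

# Check that Phi_BB(alpha_w) reproduces the cross product w x v.
# The particular values of w and v are arbitrary test data.
w1, w2, w3 = 2.0, -1.0, 3.0
Phi = np.array([[0.0, -w3,  w2],
                [w3,  0.0, -w1],
                [-w2, w1,  0.0]])

v = np.array([1.0, 4.0, -2.0])
assert np.allclose(Phi @ v, np.cross([w1, w2, w3], v))
```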




















– hopefully (18 mins ago): what about if we have four $2 \times 2$ matrices? What will be the second step, and what will be the dimension of $\Phi_{(B,B)}$ in this case?














                answered 7 hours ago









Theo Bendit

With the equations of $\alpha_v$:



Let $w = {}^{\mathrm{t}}(x, y, z)$. The coordinates of $v \times w$ are obtained as the cofactors of the determinant (along the first row):



$$\begin{vmatrix}
\vec i & \vec j & \vec k \\ a_1 & a_2 & a_3 \\ x & y & z
\end{vmatrix} \rightsquigarrow \begin{pmatrix}
a_2 z - a_3 y \\ a_3 x - a_1 z \\ a_1 y - a_2 x
\end{pmatrix} = \begin{pmatrix}
0 & -a_3 & a_2 \\ a_3 & 0 & -a_1 \\ -a_2 & a_1 & 0
\end{pmatrix} \begin{pmatrix}
x \\ y \\ z
\end{pmatrix}$$
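A small numerical check of the displayed identity (the values of $a_1, a_2, a_3, x, y, z$ are arbitrary test data, not from the answer): the cofactor expressions, the skew-symmetric matrix acting on $(x, y, z)$, and the cross product all agree.

```python
import numpy as np

# Arbitrary test values for v = (a1, a2, a3) and w = (x, y, z).
a1, a2, a3 = 1.0, 2.0, 3.0
x, y, z = 4.0, 5.0, 6.0

# Cofactors of the determinant, expanded along the first row.
cofactors = np.array([a2 * z - a3 * y, a3 * x - a1 * z, a1 * y - a2 * x])

# The skew-symmetric matrix from the right-hand side of the identity.
skew = np.array([[0.0, -a3,  a2],
                 [a3,  0.0, -a1],
                 [-a2, a1,  0.0]])

assert np.allclose(cofactors, skew @ np.array([x, y, z]))
assert np.allclose(cofactors, np.cross([a1, a2, a3], [x, y, z]))
```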






                    edited 6 hours ago

























                    answered 8 hours ago









Bernard

                        The details probably come in the proof of Theorem 8.1 (which you should read).



Let $B = (v_1,\dots,v_n)$ and $D = (w_1,\dots,w_k)$ be the given bases. Suppose that $\alpha \in \operatorname{Hom}(V,W)$. For each $i$ in $\{1,\dots,n\}$ there exist scalars $\phi_{ij} \in F$ such that
$$
\alpha(v_i) = \phi_{1i} w_1 + \phi_{2i} w_2 + \dots + \phi_{ki} w_k.
$$
Set $\Phi_{BD}(\alpha)$ to be the $k \times n$ matrix whose $(i,j)$-th entry is $\phi_{ij}$.



Now we come to angryavian's suggestion. Here $V = W = \mathbb{R}^3$, and $B = D = (e_1,e_2,e_3)$. Moreover, $\alpha(w) = v \times w$ for a fixed $v = \begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix}$. So you need to find the coefficients of $\alpha(e_1)$, $\alpha(e_2)$, and $\alpha(e_3)$ in the basis $(e_1,e_2,e_3)$.






                            answered 8 hours ago









Matthew Leingang

The first column of the matrix is $v \times \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}$, the second column is $v \times \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}$, and the third is $v \times \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}$.
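This column-by-column recipe can be sketched numerically (the fixed vector $v$ below is an arbitrary stand-in, not from the question):

```python
import numpy as np

# Build the matrix column by column: column i is v x e_i.
v = np.array([2.0, -1.0, 3.0])  # arbitrary fixed vector
e = np.eye(3)                    # standard basis vectors as columns
M = np.column_stack([np.cross(v, e[:, i]) for i in range(3)])

# The assembled matrix then satisfies M @ w == v x w for any w.
w = np.array([1.0, 4.0, -2.0])
assert np.allclose(M @ w, np.cross(v, w))
```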






– hopefully (8 hours ago): I mean, how does the given definition of the linear transformation affect the matrix?















                                answered 8 hours ago









angryavian

If $B = \{e_1,\dots,e_n\}$ and $D = \{f_1,\dots,f_m\}$ and $T$ is a linear transformation, then $\Phi_{BD}(T)$ is obtained by applying $T$ to each element of $B$ and writing the result in terms of $f_1,\dots,f_m$. That is, if



$$ T(e_j) = \sum_{i=1}^m a_{i,j} f_i, $$



then the $j$-th column of $\Phi_{BD}(T)$ is



$$ \begin{bmatrix} a_{1,j} \\ a_{2,j} \\ \vdots \\ a_{m,j} \end{bmatrix}. $$



For example, $\alpha_v(e_1) = v \times e_1 = [0, a_3, -a_2]^T = 0e_1 + a_3 e_2 - a_2 e_3$, so the first column of $\Phi_{BB}(\alpha_v)$ is $[0, a_3, -a_2]^T$.






                                share|cite|improve this answer









                                $endgroup$
                                    answered 8 hours ago









                                    Trevor Gunn
                                    15.6k 3 gold badges 22 silver badges 47 bronze badges