
Where did this useful matrix decomposition come from for Nodal Analysis?





Background



The equations formed when finding the nodal voltages of a circuit can be expressed, using nodal analysis, as a square system matrix $\mathbf{S}$ (let's say $m \times m$) that describes the connections and the conductance values associated with those connections, so the whole circuit is expressed as
$$
\mathbf{S}\mathbf{v} = \mathbf{i}
$$

where $\mathbf{v}$ is the vector of nodal voltages and $\mathbf{i}$ is the vector of input current sources.



Super useful matrix decomposition



In this paper, I have seen this decomposed (for a single impedance type, e.g. resistance) into
$$
\mathbf{S} = \mathbf{N}\mathbf{G}\mathbf{N}^{\mathrm{T}}
$$

where $\mathbf{N}$ is an $m \times m$ incidence matrix that specifies the connections and contains only the values 1, 0 and -1, and $\mathbf{G}$ is an $m \times m$ diagonal matrix containing the conductance values.



This is a ridiculously useful property because it separates the conductances from the connections, making both easily readable. None of the matrix decompositions I've read up on make it clear how this works or how you'd intuitively think to apply it. Could someone explain this?
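
For concreteness, here is a tiny numerical sketch of what I mean (my own example, not from the paper, and assuming an orientation where $\mathbf{N}$ has one row per non-ground node and one $\pm 1$ column per two-terminal element): building $\mathbf{N}$ and the diagonal $\mathbf{G}$ for a three-node resistor chain and multiplying them out reproduces the usual nodal conductance matrix.

#include <stdio.h>

#define NODES 3    /* non-ground nodes                     */
#define BRANCHES 4 /* two-terminal conductances (example)  */

int main( void )
{
    /* Incidence matrix N (nodes x branches): column b has +1 at the node the
       branch leaves and -1 at the node it enters; a ground terminal contributes
       nothing because the ground row is dropped.  Example circuit (assumed):
       g1 node1-gnd, g2 node1-node2, g3 node2-node3, g4 node3-gnd.             */
    double N[NODES][BRANCHES] = {
        { 1.0,  1.0,  0.0,  0.0 },
        { 0.0, -1.0,  1.0,  0.0 },
        { 0.0,  0.0, -1.0,  1.0 },
    };

    /* Diagonal of G: one conductance per branch (siemens). */
    double g[BRANCHES] = { 0.5, 0.1, 0.2, 1.0 };

    /* S = N * G * N^T, exploiting that G is diagonal. */
    double S[NODES][NODES] = { { 0.0 } };
    for ( int i = 0; i < NODES; ++i )
        for ( int j = 0; j < NODES; ++j )
            for ( int b = 0; b < BRANCHES; ++b )
                S[i][j] += N[i][b] * g[b] * N[j][b];

    /* Diagonal entries come out as the sum of conductances at each node and
       off-diagonals as minus the conductance between nodes, i.e. the familiar
       nodal conductance matrix.                                               */
    for ( int i = 0; i < NODES; ++i ) {
        for ( int j = 0; j < NODES; ++j )
            printf( "%7.3f ", S[i][j] );
        printf( "\n" );
    }
    return 0;
}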



Notes



The paper actually uses modified nodal analysis, but this doesn't change the application, since the decomposition is only used on the nodal aspects of the circuit, not on the voltage sources.










math nodal-analysis

asked by loudnoises




















2 Answers



















According to the document you linked, it appears to me that $\mathbf{N}$ isn't $m \times m$. Instead, it has one row per two-terminal circuit element (from a quick reading) and one column for each circuit node.



This technique has been used for decades in computing to represent connections. I've used it for finding Hamiltonian cycles in graphs, for example. It's a really simple way of expressing connections.



For example, here's a 35-year-old piece of code I wrote to test out a method for finding the existence of such cycles:



#include <stdio.h>
#include <stdlib.h>

typedef enum { false= 0, true= 1 } bool_t;

/* Print a found cycle, closing it by repeating the starting node. */
void hamPrint( int n, int *path )
{
    int i;
    for ( i= 0; i < n; ++i )
        printf( " %d ", path[i] );
    printf( " %d\n", path[0] );
    return;
}

/* Vertex v can extend the partial path if it is adjacent to the last
   placed vertex and has not been used yet.                           */
bool_t hamOkay( int n, int v, bool_t *graph, int *path, int pos )
{
    int i;
    if ( graph[ path[pos-1]*n + v ] == false ) return false;
    for ( i= 0; i < pos; ++i ) if ( path[i] == v ) return false;
    return true;
}

/* Backtracking search: place one vertex per position in the path. */
bool_t hamCycleSolver( int n, bool_t *graph, int *path, int pos )
{
    int v;
    if ( pos == n )
        return graph[ path[pos-1]*n + path[0] ];
    for ( v= 1; v < n; ++v )
    {
        if ( hamOkay( n, v, graph, path, pos ) )
        {
            path[pos]= v;
            if ( hamCycleSolver( n, graph, path, pos+1 ) == true )
                return true;
            path[pos]= -1;
        }
    }
    return false;
}

bool_t hamCycleExist( int n, bool_t *graph )
{
    bool_t stat;
    int i, *path= (int *) malloc( sizeof(int) * n );
    if ( path == NULL ) return false;
    for ( i= 0; i < n; ++i )
        path[i]= -1;
    path[0]= 0;
    stat= hamCycleSolver( n, graph, path, 1 );
    if ( stat == true ) hamPrint( n, path );
    free( path );
    return stat;
}

/* Connection (adjacency) matrix of the test graph.  The original entries
   were garbled in extraction; this 5-node cycle is an example consistent
   with the one surviving row { 0, 1, 0, 0, 1 }.                          */
bool_t graph[][5]=
{
    { 0, 1, 0, 0, 1 },
    { 1, 0, 1, 0, 0 },
    { 0, 1, 0, 1, 0 },
    { 0, 0, 1, 0, 1 },
    { 1, 0, 0, 1, 0 }
};

int main( void )
{
    if ( hamCycleExist( sizeof(graph)/sizeof(graph[0]), (bool_t *) graph ) )
        printf( "Graph is Hamiltonian\n" );
    else
        printf( "Graph is not Hamiltonian\n" );
    return 0;
}


Take note of the use of a connection matrix in the array graph. In this case, the connections must be specified in both directions: there are 1s connecting, for example, node 0 to node 1 and also node 1 to node 0. It would be just as easy to make this matrix specify a path from node 0 to node 1 without specifying a path from node 1 to node 0; I just didn't do that in the above case. All connections there are explicitly arranged to work in both directions.



If interested, you can also multiply such a matrix by an appropriate vector to get, for each node, a count of the connections it has to the nodes selected by that vector.
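
As a minimal sketch of that idea, reusing the 5-node connection matrix from above: multiplying it by an indicator vector that selects node 0 produces a vector marking the nodes connected to node 0.

#include <stdio.h>

#define N 5

int main( void )
{
    /* Same style of connection matrix as above (5-node cycle, illustration). */
    int graph[N][N] = {
        { 0, 1, 0, 0, 1 },
        { 1, 0, 1, 0, 0 },
        { 0, 1, 0, 1, 0 },
        { 0, 0, 1, 0, 1 },
        { 1, 0, 0, 1, 0 }
    };

    /* Indicator vector selecting node 0 only. */
    int x[N] = { 1, 0, 0, 0, 0 };

    /* y = graph * x: y[i] counts how many selected nodes connect to node i. */
    int y[N];
    for ( int i = 0; i < N; ++i ) {
        y[i] = 0;
        for ( int j = 0; j < N; ++j )
            y[i] += graph[i][j] * x[j];
    }

    for ( int i = 0; i < N; ++i )
        printf( "node %d: %d\n", i, y[i] );  /* nodes 1 and 4 report 1 */
    return 0;
}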



In any case, here is a web page I readily found on Google that may also help demonstrate that these ideas have been around for a long time and are in regular use:
Graph representations.



          I had simply borrowed the idea, myself. I didn't invent it. So it pre-dates my use. And that means it is practically ancient. ;) I wouldn't be the least bit surprised to hear it dates into the 1800's.






answered by jonk





















I'm no mathematician, but I feel strongly that this is related to the singular value decomposition (SVD) or the eigendecomposition.



I first came across the SVD in the context of modelling MIMO communication systems, particularly those using spatial multiplexing. I'll try to detail this to explain why I think it relates to your problem, which I am not able to answer directly.



Consider a time-invariant, noiseless MIMO channel. This can be represented as
$$
\mathbf{y} = H(\omega)\mathbf{x}
$$



where $H$ is a matrix of transfer functions between the various parallel channels. Ideally, $H$ would be diagonal and there would be no coupling between the channels. The presence of off-diagonal entries means that equalization will be required to prevent the channels from interfering.



The SVD decomposes $H$ into
$$
H = U\Lambda V^*
$$



where $U$ and $V$ can be thought of as rotations and $\Lambda$ is a diagonal matrix that simply scales each channel individually. $U$ and $V$ are both unitary matrices, so their inverses are their conjugate transposes. The columns of $U$ and $V$ also form orthonormal bases, so they can be thought of as the natural 'coordinate system' for solving the problem.



Intuitively, it takes the input channels, which are not orthogonal, and applies a transformation at the input and at the output that makes the behavior of the channel very simple: just attenuation (the matrix $\Lambda$).



This has an application to equalization: if we pre-multiply our input signals by $V$, pass them through the channel, and apply $U^*$ to the output, we get
$$
\mathbf{y} = U^*U\Lambda V^*V\mathbf{x} \\
\mathbf{y} = \Lambda \mathbf{x}
$$



which gives us completely orthogonal channels that do not interfere. This reminds me very much of your problem, with the connection matrices being the natural orthogonal basis to use and the conductances simply scaling them.
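
As a tiny numerical sketch of that cancellation (my own illustration, using real 2x2 rotation matrices so the conjugate transpose reduces to the ordinary transpose): build $H = U\Lambda V^*$ from chosen rotations and gains, then check that $U^* H V$ comes back to the diagonal $\Lambda$.

#include <stdio.h>
#include <math.h>   /* compile with -lm */

/* Multiply 2x2 matrices: C = A * B. */
static void mul2( const double A[2][2], const double B[2][2], double C[2][2] )
{
    for ( int i = 0; i < 2; ++i )
        for ( int j = 0; j < 2; ++j )
            C[i][j] = A[i][0]*B[0][j] + A[i][1]*B[1][j];
}

int main( void )
{
    /* U and V: rotations by arbitrary angles; L: per-channel gains (assumed values). */
    double a = 0.3, b = 1.1;
    double U[2][2] = { { cos(a), -sin(a) }, { sin(a), cos(a) } };
    double V[2][2] = { { cos(b), -sin(b) }, { sin(b), cos(b) } };
    double L[2][2] = { { 2.0, 0.0 }, { 0.0, 0.5 } };

    /* Transposes (equal to the inverses, since U and V are orthogonal). */
    double Ut[2][2] = { { U[0][0], U[1][0] }, { U[0][1], U[1][1] } };
    double Vt[2][2] = { { V[0][0], V[1][0] }, { V[0][1], V[1][1] } };

    /* Channel H = U * L * V^T: a coupled, non-diagonal matrix. */
    double T[2][2], H[2][2];
    mul2( L, Vt, T );
    mul2( U, T, H );

    /* Equalized channel U^T * H * V: should come back to the diagonal L. */
    double D[2][2];
    mul2( H, V, T );
    mul2( Ut, T, D );

    printf( "H       = [% .3f % .3f; % .3f % .3f]\n", H[0][0], H[0][1], H[1][0], H[1][1] );
    printf( "U^T H V = [% .3f % .3f; % .3f % .3f]\n", D[0][0], D[0][1], D[1][0], D[1][1] );
    return 0;
}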



The SVD also has some interesting applications in image processing.



            Edit: The decomposition in question is definitely an eigenvalue decomposition, of which the SVD can be thought of as a generalization.






answered by jramsay42












