How to feed LSTM with different input array sizes?



If I want to build an LSTM network and feed it input arrays of different sizes, how can I do that?

For example, I want to receive voice or text messages in different languages and translate them. The first input might be "hello" while the second is "how are you doing". How can I design an LSTM that can handle input arrays of different sizes?

I am using the Keras implementation of LSTM.

Tags: keras, lstm






asked 18 hours ago by user145959

2 Answers

You can use LSTM layers with inputs of different sizes, but you need to preprocess them before they are fed to the LSTM.

Padding the sequences:

You need to pad the variable-length sequences to a fixed length. For this preprocessing step, you need to determine the maximum sequence length in your dataset.

Sequences are most commonly padded with the value 0. You can do this in Keras with:

y = keras.preprocessing.sequence.pad_sequences( x , maxlen=10 )

• If a sequence is shorter than the max length, it is padded with zeros until its length equals the max length (by default the zeros are prepended; pass padding='post' to append them instead).

• If a sequence is longer than the max length, it is trimmed to the max length.
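As a rough sketch of this approach (the vocabulary size, embedding size, maxlen, and labels below are illustrative assumptions, not values from the question), you can pad integer token sequences and let an Embedding layer with mask_zero=True tell the LSTM to skip the padded positions:

import numpy as np
from keras.models import Sequential
from keras.layers import Embedding, LSTM, Dense
from keras.preprocessing.sequence import pad_sequences

# two toy token sequences of different lengths, e.g. "hello" vs "how are you doing"
x = [[12, 7], [4, 9, 23, 5]]

# pad both to length 10; padding='post' appends the zeros at the end
x_pad = pad_sequences(x, maxlen=10, padding='post')   # shape (2, 10)

model = Sequential()
model.add(Embedding(input_dim=1000, output_dim=8, mask_zero=True))  # index 0 is treated as padding
model.add(LSTM(16))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='rmsprop')
model.fit(x_pad, np.array([0, 1]), epochs=1)

With mask_zero=True the value 0 is reserved for padding, so real tokens should be indexed starting from 1.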






answered 16 hours ago by Shubham Panchal

The easiest way is to use Padding and Masking.

There are three general ways to handle variable-length sequences:

1. Padding and masking (which can be used for (3)),
2. Batch size = 1, and
3. Batch size > 1, with equal-length samples in each batch.

Padding and masking

In this approach, we pad the shorter sequences with a special value that is masked (skipped) later. For example, suppose each timestep has dimension 2 and -10 is the special value; then

X = [
    [[1, 1.1],
     [0.9, 0.95]],  # sequence 1 (2 timesteps)

    [[2, 2.2],
     [1.9, 1.95],
     [1.8, 1.85]],  # sequence 2 (3 timesteps)
]

will be converted to

X2 = [
    [[1, 1.1],
     [0.9, 0.95],
     [-10, -10]],   # padded sequence 1 (3 timesteps)

    [[2, 2.2],
     [1.9, 1.95],
     [1.8, 1.85]],  # sequence 2 (3 timesteps)
]

This way, all sequences have the same length. Then we use a Masking layer that skips those special timesteps as if they don't exist. A complete example is given at the end.

For cases (2) and (3) you need to set the sequence-length dimension of the LSTM input shape to None, e.g.

model.add(LSTM(units, input_shape=(None, dimension)))

This way the LSTM accepts batches with different lengths, although the samples inside each batch must all have the same length. You then need to feed a custom batch generator to model.fit_generator (instead of model.fit).

I have provided a complete example for the simple case (2) (batch size = 1) at the end. Based on this example and the link, you should be able to build a generator for case (3) (batch size > 1). Specifically, we either (a) return batch_size sequences with the same length, or (b) select sequences with almost the same length, pad the shorter ones as in case (1), and use a Masking layer before the LSTM layer to ignore the padded timesteps, e.g.



            model.add(Masking(mask_value=special_value, input_shape=(None, dimension)))
            model.add(LSTM(lstm_units))


where the first dimension of input_shape in Masking is again None, to allow batches with different lengths.
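As a rough sketch of option (a) for case (3) (the EqualLengthBatchGenerator class below is my own illustration, not part of the original answer), you can bucket samples by sequence length so that every batch contains equal-length sequences:

from keras.utils import Sequence
import numpy as np

class EqualLengthBatchGenerator(Sequence):
    'Yields batches in which all sequences have the same length (case (3), option (a)).'
    def __init__(self, X, y, batch_size=32):
        self.X, self.y = X, y
        # group sample indices by their sequence length
        buckets = {}
        for i, x in enumerate(X):
            buckets.setdefault(x.shape[0], []).append(i)
        # split every bucket into chunks of at most batch_size samples
        self.batches = [idx[i:i + batch_size]
                        for idx in buckets.values()
                        for i in range(0, len(idx), batch_size)]

    def __len__(self):
        return len(self.batches)

    def __getitem__(self, index):
        idx = self.batches[index]
        Xb = np.stack([self.X[i] for i in idx])  # stacking works: equal lengths per batch
        yb = np.stack([self.y[i] for i in idx])
        return Xb, yb

Under these assumptions it can be passed to model.fit_generator just like MyBatchGenerator below.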



            Here is the code for cases (1) and (2):



from keras import Sequential
from keras.utils import Sequence
from keras.layers import LSTM, Dense, Masking
import numpy as np


class MyBatchGenerator(Sequence):
    'Generates data for Keras'

    def __init__(self, X, y, batch_size=1, shuffle=True):
        'Initialization'
        self.X = X
        self.y = y
        self.batch_size = batch_size
        self.shuffle = shuffle
        self.on_epoch_end()

    def __len__(self):
        'Denotes the number of batches per epoch'
        return int(np.floor(len(self.y)/self.batch_size))

    def __getitem__(self, index):
        # map through the (possibly shuffled) index list
        return self.__data_generation(self.indexes[index])

    def on_epoch_end(self):
        'Shuffles indexes after each epoch'
        self.indexes = np.arange(len(self.y))
        if self.shuffle:
            np.random.shuffle(self.indexes)

    def __data_generation(self, index):
        Xb = np.empty((self.batch_size, *self.X[index].shape))
        yb = np.empty((self.batch_size, *self.y[index].shape))
        # naively use the same sample over and over again
        for s in range(0, self.batch_size):
            Xb[s] = self.X[index]
            yb[s] = self.y[index]
        return Xb, yb


# Parameters
N = 1000
halfN = int(N/2)
dimension = 2
lstm_units = 3

# Data
np.random.seed(123)  # to generate the same numbers
# create sequence lengths between 1 and 9 (randint's upper bound is exclusive)
seq_lens = np.random.randint(1, 10, halfN)
X_zero = np.array([np.random.normal(0, 1, size=(seq_len, dimension)) for seq_len in seq_lens])
y_zero = np.zeros((halfN, 1))
X_one = np.array([np.random.normal(1, 1, size=(seq_len, dimension)) for seq_len in seq_lens])
y_one = np.ones((halfN, 1))
p = np.random.permutation(N)  # to shuffle zero and one classes
X = np.concatenate((X_zero, X_one))[p]
y = np.concatenate((y_zero, y_one))[p]

# Batch = 1
model = Sequential()
model.add(LSTM(lstm_units, input_shape=(None, dimension)))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
print(model.summary())
model.fit_generator(MyBatchGenerator(X, y, batch_size=1), epochs=2)

# Padding and Masking
special_value = -10.0
max_seq_len = max(seq_lens)
Xpad = np.full((N, max_seq_len, dimension), fill_value=special_value)
for s, x in enumerate(X):
    seq_len = x.shape[0]
    Xpad[s, 0:seq_len, :] = x
model2 = Sequential()
model2.add(Masking(mask_value=special_value, input_shape=(max_seq_len, dimension)))
model2.add(LSTM(lstm_units))
model2.add(Dense(1, activation='sigmoid'))
model2.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
print(model2.summary())
model2.fit(Xpad, y, epochs=50, batch_size=32)


            Extra notes



1. Note that if we pad without masking, the padded values are treated as actual values and therefore become noise in the data. For example, a padded temperature sequence [20, 21, 22, -10, -10] looks the same as a sensor report with two noisy (wrong) measurements at the end. The model may learn to ignore this noise completely or at least partially, but it is more reasonable to clean the data first, i.e. use a mask.





answered 15 hours ago by Esmailian (edited 3 hours ago)

• Thank you very much Esmailian for your complete example. Just one question: What is the difference between using padding+masking and only using padding (like what the other answer suggested)? Will we see a considerable effect on the final result? – user145959, 5 hours ago

• @user145959 my pleasure! I added a note at the end. – Esmailian, 3 hours ago










