How to use additional features along with word embeddings in Keras?
Consider having a separate feedforward network that takes in those features and outputs some n-dimensional vector.
from keras.layers import Input, Dense

# Feedforward branch over the time-independent features
time_independent = Input(shape=(num_features,))
dense_1 = Dense(200, activation='tanh')(time_independent)
dense_2 = Dense(300, activation='tanh')(dense_1)
Note that you will need Keras' functional API to build something like this. You would then either pass this vector in as the initial hidden state of the LSTM, or concatenate it with every word embedding so that the LSTM sees it at every timestep. In the latter case, you would want to drastically reduce the dimensionality of the network. A sketch of both options follows below.
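Here is a minimal sketch of both options, assuming placeholder values for num_features, seq_length, vocab_size, and the layer sizes (none of these come from the question):

from keras.layers import Input, Dense, Embedding, LSTM, RepeatVector, concatenate
from keras.models import Model

num_features = 10   # placeholder: number of time-independent features
seq_length = 50     # placeholder: length of the padded word sequences
vocab_size = 10000  # placeholder: vocabulary size

# Feedforward branch over the time-independent features
time_independent = Input(shape=(num_features,))
dense_1 = Dense(200, activation='tanh')(time_independent)
dense_2 = Dense(300, activation='tanh')(dense_1)

# Text branch
words = Input(shape=(seq_length,))
emb = Embedding(input_dim=vocab_size, output_dim=300)(words)

# Option 1: use the feature vector as the LSTM's initial state; its width
# must match the number of LSTM units (300 here)
lstm_out = LSTM(300)(emb, initial_state=[dense_2, dense_2])

# Option 2: repeat the feature vector across timesteps and concatenate it
# with every word embedding (uncomment to use instead of option 1)
# repeated = RepeatVector(seq_length)(dense_2)
# lstm_out = LSTM(128)(concatenate([emb, repeated]))

output = Dense(1, activation='sigmoid')(lstm_out)
model = Model(inputs=[words, time_independent], outputs=output)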
I wrote about how to do this in Keras. It is basically a functional multiple-input model that concatenates both feature vectors like this:
from keras.layers import Input, Embedding, LSTM, Bidirectional, Dense, concatenate
from keras.models import Model

nlp_input = Input(shape=(seq_length,), name='nlp_input')
meta_input = Input(shape=(10,), name='meta_input')
emb = Embedding(output_dim=embedding_size, input_dim=100, input_length=seq_length)(nlp_input)
nlp_out = Bidirectional(LSTM(128))(emb)
x = concatenate([nlp_out, meta_input])
x = Dense(classifier_neurons, activation='relu')(x)
x = Dense(1, activation='sigmoid')(x)
model = Model(inputs=[nlp_input, meta_input], outputs=[x])
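Training then works like any other Keras model, except that fit takes a list of arrays, one per input. A quick sketch with made-up data, where only the shapes matter:

import numpy as np

model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

# Dummy data matching the two inputs: token IDs in [0, 100) for nlp_input,
# 10 metadata features for meta_input, and binary labels
X_text = np.random.randint(0, 100, size=(32, seq_length))
X_meta = np.random.random((32, 10))
y = np.random.randint(0, 2, size=(32, 1))

model.fit([X_text, X_meta], y, epochs=2, batch_size=8)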
You want to add more input layers, which is not possible with a Sequential model; you have to go for the functional Model, which allows you to have multiple inputs and indirect connections.
from keras.models import Model
from keras.layers import Input, Embedding, LSTM, Dense, Dropout, Activation, Concatenate

# Text branch: the Embedding layer has to be applied to an Input tensor
text = Input(shape=(70,))
embed = Embedding(word_index, 300, weights=[embedding_matrix], input_length=70, trainable=False)(text)
lstm = LSTM(300, dropout=0.3, recurrent_dropout=0.3)(embed)
agei = Input(shape=(1,))
# Concatenate expects a list of tensors
conc = Concatenate()([lstm, agei])
drop = Dropout(0.6)(conc)
dens = Dense(1)(drop)
acti = Activation('sigmoid')(dens)
# The model's inputs are the Input tensors, not the Embedding layer
model = Model([text, agei], acti)
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
You cannot concatenate the auxiliary input before the LSTM layer, because the shapes don't line up: after the embedding layer you have a 3D tensor (batch, timesteps, features), while the auxiliary input is a 2D tensor (batch, features).
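If in doubt, you can inspect the shapes directly (just an illustrative check on the model above):

from keras import backend as K

print(K.int_shape(embed))  # (None, 70, 300) -> 3D: (batch, timesteps, features)
print(K.int_shape(agei))   # (None, 1)       -> 2D: (batch, features)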