Keras : How should I prepare input data for RNN?
Here is a fast way to create 3D data for an LSTM/RNN without loops, using this simple function:
import numpy as np

def create_windows(data, window_shape, step = 1, start_id = None, end_id = None):
    data = np.asarray(data)
    # promote 1D input to a single-feature column
    data = data.reshape(-1,1) if np.prod(data.shape) == max(data.shape) else data

    start_id = 0 if start_id is None else start_id
    end_id = data.shape[0] if end_id is None else end_id
    data = data[int(start_id):int(end_id),:]

    window_shape = (int(window_shape), data.shape[-1])
    step = (int(step),) * data.ndim
    slices = tuple(slice(None, None, st) for st in step)
    indexing_strides = data[slices].strides
    win_indices_shape = ((np.array(data.shape) - window_shape) // step) + 1

    new_shape = tuple(list(win_indices_shape) + list(window_shape))
    strides = tuple(list(indexing_strides) + list(data.strides))

    # build the sliding windows as a zero-copy strided view
    window_data = np.lib.stride_tricks.as_strided(data, shape=new_shape, strides=strides)
    return np.squeeze(window_data, 1)
starting from this sample data:
n_sample = 2000
n_feat_inp = 6
n_feat_out = 1
X = np.asarray([np.arange(n_sample)]*n_feat_inp).T # (n_sample, n_feat_inp)
y = np.asarray([np.arange(n_sample)]*n_feat_out).T # (n_sample, n_feat_out)
If we want a ONE-step-ahead forecast:
look_back = 5
look_ahead = 1
X_seq = create_windows(X, window_shape = look_back, end_id = -look_ahead)
# X_seq.shape --> (n_sample - look_back, look_back, n_feat_inp)
y_seq = create_windows(y, window_shape = look_ahead, start_id = look_back)
# y_seq.shape --> (n_sample - look_back, look_ahead, n_feat_out)
example of generated data:
X_seq[0]: [[0, 0, 0, 0, 0, 0],
[1, 1, 1, 1, 1, 1],
[2, 2, 2, 2, 2, 2],
[3, 3, 3, 3, 3, 3],
[4, 4, 4, 4, 4, 4]]
y_seq[0]: [[5]]
If we want a MULTI-step-ahead forecast:
look_back = 5
look_ahead = 3
X_seq = create_windows(X, window_shape = look_back, end_id = -look_ahead)
# X_seq.shape --> (n_sample - look_back - look_ahead + 1, look_back, n_feat_inp)
y_seq = create_windows(y, window_shape = look_ahead, start_id = look_back)
# y_seq.shape --> (n_sample - look_back - look_ahead + 1, look_ahead, n_feat_out)
example of generated data:
X_seq[0]: [[0, 0, 0, 0, 0, 0],
[1, 1, 1, 1, 1, 1],
[2, 2, 2, 2, 2, 2],
[3, 3, 3, 3, 3, 3],
[4, 4, 4, 4, 4, 4]]
y_seq[0]: [[5],
[6],
[7]]
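As a quick cross-check of the windowing above, the same sequences can be built with plain slicing. This is a slower but easy-to-read sketch (here `n_sample` is shrunk to 20 for brevity); it also verifies that each target window starts right after its input window:

```python
import numpy as np

n_sample, n_feat_inp, n_feat_out = 20, 6, 1
look_back, look_ahead = 5, 3

X = np.asarray([np.arange(n_sample)] * n_feat_inp).T   # (20, 6)
y = np.asarray([np.arange(n_sample)] * n_feat_out).T   # (20, 1)

# equivalent construction with plain slicing, for cross-checking
n_windows = n_sample - look_back - look_ahead + 1
X_seq = np.stack([X[i:i + look_back] for i in range(n_windows)])
y_seq = np.stack([y[i + look_back:i + look_back + look_ahead] for i in range(n_windows)])

assert X_seq.shape == (n_windows, look_back, n_feat_inp)
assert y_seq.shape == (n_windows, look_ahead, n_feat_out)
# every target window starts right after its input window
assert (y_seq[0].ravel() == np.arange(look_back, look_back + look_ahead)).all()
```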
If you only want to predict the output from the most recent 5 time steps, you never need to provide the full 600 time steps of a training sample. My suggestion is to pass the training data in the following manner:
t=0 t=1 t=2 t=3 t=4 t=5 ... t=598 t=599
sample0 |---------------------|
sample0 |---------------------|
sample0 |-----------------
...
sample0 ----|
sample0 ----------|
sample1 |---------------------|
sample1 |---------------------|
sample1 |-----------------
....
....
sample6751 ----|
sample6751 ----------|
The total number of training sequences sums up to
(600 - 4) * 6752 = 4024192  # (total_timesteps - (nb_timesteps - 1)) * nb_samples
Each training sequence consists of 5 time steps, and at each time step of every sequence you pass all 13 elements of the feature vector. Consequently, the shape of the training data will be (4024192, 5, 13).
This loop can reshape your data:
import numpy as np

input_data = np.random.rand(6752, 600, 13)
nb_timesteps = 5

chunks = []
for sample in range(input_data.shape[0]):
    # all windows of length nb_timesteps for this sample
    tmp = np.array([input_data[sample, i:i + nb_timesteps, :]
                    for i in range(input_data.shape[1] - nb_timesteps + 1)])
    chunks.append(tmp)
new_input = np.concatenate(chunks)
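If your NumPy version is 1.20 or newer, the same reshaping can be done without a Python loop via `sliding_window_view`. This is a sketch under that assumption, with the array sizes shrunk (8 samples instead of 6752) so the demo stays small:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

nb_samples, total_steps, nb_features = 8, 600, 13   # 8 instead of 6752 for the demo
nb_timesteps = 5

data = np.random.rand(nb_samples, total_steps, nb_features)

# windows over the time axis; the window axis is appended last: (8, 596, 13, 5)
win = sliding_window_view(data, nb_timesteps, axis=1)
# reorder to (8, 596, 5, 13), then merge samples and window starts
new_input = np.moveaxis(win, -1, 2).reshape(-1, nb_timesteps, nb_features)

assert new_input.shape == ((total_steps - nb_timesteps + 1) * nb_samples,
                           nb_timesteps, nb_features)
# window 1 of sample 0 is data[0, 1:6]
assert np.array_equal(new_input[1], data[0, 1:1 + nb_timesteps])
```

The strided view itself costs no memory; only the final `reshape` copies, so at the full (6752, 600, 13) size you still pay for one copy of the expanded array, just as the loop version does.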