I am a bit confused about the output shape of a Keras layer. I have created an example Keras model, and its summary is shown below.
from keras.layers import Input, LSTM, Dense
from keras.models import Model

numberOfLSTMcells = 1
n_timesteps_in = 129
n_features = 61
inp = Input(shape=(n_timesteps_in, n_features))
lstm = LSTM(numberOfLSTMcells, return_sequences=True, return_state=False)(inp)
fc = Dense(64, activation='relu', name='hidden_layer')(lstm)
out = Dense(1, activation='sigmoid', name='last_layer')(fc)
model = Model(inputs=inp, outputs=out)
Model summary:
Model: "model_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_3 (InputLayer) [(None, 129, 61)] 0
_________________________________________________________________
lstm_2 (LSTM) (None, 129, 1) 252
_________________________________________________________________
hidden_layer (Dense) (None, 129, 64) 128
_________________________________________________________________
last_layer (Dense) (None, 129, 1) 65
=================================================================
Total params: 445
Trainable params: 445
Non-trainable params: 0
I think the output shape of the last layer should be (None, 64, 1), because hidden_layer has 64 neurons whose output is passed as input to last_layer. Why is it (None, 129, 1) instead?
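My current understanding, which may be where I am going wrong, is that Dense applied to a 3-D input multiplies its kernel only against the last axis, leaving the time axis of 129 untouched. A minimal NumPy sketch of that assumption (the batch size of 2 is arbitrary, chosen just to stand in for None):

```python
import numpy as np

# Stand-in for last_layer: Dense(1) applied to the output of hidden_layer.
batch, timesteps, units_in, units_out = 2, 129, 64, 1

x = np.random.rand(batch, timesteps, units_in)   # shape (2, 129, 64)
W = np.random.rand(units_in, units_out)          # kernel, shape (64, 1)
b = np.zeros(units_out)                          # bias, shape (1,)

# The kernel contracts only the last axis; each of the 129 timesteps
# gets the same (64 -> 1) projection.
y = x @ W + b

print(y.shape)  # (2, 129, 1), matching the (None, 129, 1) in the summary
```

If this is right, the 64 never appears in the final output shape because it is the axis being contracted, not preserved.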