---
id: 5e8f2f13c4cdbe86b5c72da1
challengeType: 11
videoId: 32WBFS7lfsw
dashedName: natural-language-processing-with-rnns-building-the-model
---
# --question--

## --text--
Fill in the blanks below to complete the `build_model` function:
```py
def build_model(vocab_size, embedding_dim, rnn_units, batch_size):
    model = tf.keras.Sequential([
        tf.keras.layers.Embedding(vocab_size,
                                  embedding_dim,
                                  batch_input_shape=[batch_size, None]),
        tf.keras.layers.__A__(rnn_units,
                              return_sequences=__B__,
                              recurrent_initializer='glorot_uniform'),
        tf.keras.layers.Dense(__C__)
    ])
    __D__
```
## --answers--

A: ELU

B: True

C: vocab_size

D: return model

---

A: LSTM

B: False

C: batch_size

D: return model

---

A: LSTM

B: True

C: vocab_size

D: return model
## --video-solution--
3
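
For reference, here is a minimal runnable sketch of the completed function, with the blanks filled according to answer 3 above (LSTM, True, vocab_size, return model). It assumes TensorFlow 2.x; the sizes in the usage example at the bottom are illustrative only, not values from the course.

```py
import tensorflow as tf

def build_model(vocab_size, embedding_dim, rnn_units, batch_size):
    # Map each token id to a dense vector of length embedding_dim.
    # batch_input_shape fixes the batch size so the RNN state can be
    # carried over between batches during text generation.
    model = tf.keras.Sequential([
        tf.keras.layers.Embedding(vocab_size,
                                  embedding_dim,
                                  batch_input_shape=[batch_size, None]),
        # Return the hidden state at every timestep so the Dense layer
        # can predict the next token for each position in the sequence.
        tf.keras.layers.LSTM(rnn_units,
                             return_sequences=True,
                             recurrent_initializer='glorot_uniform'),
        # One logit per vocabulary entry at each timestep.
        tf.keras.layers.Dense(vocab_size)
    ])
    return model

# Hypothetical sizes for illustration only:
model = build_model(vocab_size=65, embedding_dim=256, rnn_units=1024, batch_size=64)
model.summary()
```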