from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (Conv1D, Conv2D, Dense, Flatten, LSTM,
                                     MaxPooling2D, Reshape)

model = Sequential()
model.add(Conv2D(64, kernel_size=(3, 3), padding='same', input_shape=(28, 28, 1)))  # (None, 28, 28, 64)
model.add(MaxPooling2D())                   # (None, 14, 14, 64)
model.add(Conv2D(32, (3, 3)))               # (None, 12, 12, 32)
model.add(Conv2D(7, (3, 3)))                # (None, 10, 10, 7)
model.add(Flatten())                        # (None, 700)
model.add(Dense(100, activation='relu'))    # (None, 100)
model.add(Reshape(target_shape=(100, 1)))   # (None, 100, 1)
model.add(Conv1D(10, 3))                    # (None, 98, 10)  * no padding, so length shrinks by 2
model.add(LSTM(16))                         # (None, 16)
model.add(Dense(32, activation='relu'))     # (None, 32)
model.add(Dense(32, activation='relu'))     # (None, 32)
model.add(Dense(10, activation='softmax'))  # (None, 10)
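The commented output shapes can be sanity-checked by rebuilding the stack and pushing a dummy MNIST-shaped batch through it (a minimal sketch, assuming TensorFlow 2.x / Keras):

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (Conv1D, Conv2D, Dense, Flatten, LSTM,
                                     MaxPooling2D, Reshape)

# Same layer stack as above, built from a list for brevity.
model = Sequential([
    Conv2D(64, kernel_size=(3, 3), padding='same', input_shape=(28, 28, 1)),
    MaxPooling2D(),                    # (None, 14, 14, 64)
    Conv2D(32, (3, 3)),                # (None, 12, 12, 32)
    Conv2D(7, (3, 3)),                 # (None, 10, 10, 7)
    Flatten(),                         # (None, 700)
    Dense(100, activation='relu'),     # (None, 100)
    Reshape((100, 1)),                 # (None, 100, 1)
    Conv1D(10, 3),                     # (None, 98, 10): valid padding, length 100 -> 98
    LSTM(16),                          # (None, 16)
    Dense(32, activation='relu'),
    Dense(32, activation='relu'),
    Dense(10, activation='softmax'),   # (None, 10)
])

x = np.zeros((4, 28, 28, 1), dtype='float32')  # dummy batch of 4 "images"
y = model.predict(x, verbose=0)
print(y.shape)  # (4, 10): one softmax distribution over 10 classes per sample
```

`model.summary()` prints the same per-layer shapes as the inline comments.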
When using Reshape:
- The order and contents of the values do not change.
- It adds no computation (no trainable parameters).
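Both points can be checked directly (an illustrative sketch, not from the post): Reshape only changes the shape metadata, so the elements keep their order and the layer carries no weights.

```python
import numpy as np
import tensorflow as tf

layer = tf.keras.layers.Reshape(target_shape=(100, 1))
x = np.arange(100, dtype='float32').reshape(1, 100)  # one sample of 100 values
y = layer(x).numpy()                                 # shape becomes (1, 100, 1)

print(y.shape)                               # (1, 100, 1)
print(np.array_equal(y.ravel(), x.ravel()))  # True: same values, same order
print(len(layer.weights))                    # 0: no trainable parameters
```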