[This post is a summary of what I studied from the book "Do it 딥러닝 입문", written for review.]
Understanding the Model
Sequential - Keras provides the Sequential class for building neural network models.
Dense - a fully connected layer added to the model.
Adding layers to a Sequential model
- model = Sequential([Dense(...), ...])
- model.add(dense)
input_shape=(784,)
- Determines the shape of the layer's weights. Since each input sample X is a flattened vector with 784 columns, the example sets it to 784.
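Both ways of adding layers can be sketched as below. This is a minimal, hypothetical example assuming a Fashion-MNIST style setup: 784 input features, a hidden layer of 100 units (an assumed size, not stated above), and 10 output classes.

```python
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense

model = Sequential()
# input_shape=(784,) fixes the first layer's weight shape to (784, 100)
model.add(Dense(100, activation='sigmoid', input_shape=(784,)))
# second layer's weights are inferred from the previous layer: (100, 10)
model.add(Dense(10, activation='softmax'))
```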
Dense class parameters
units
- Specifies the number of neurons in the layer.
activation
- Specifies the activation function by name, e.g. activation='sigmoid' or activation='softmax'.
kernel_regularizer
- Parameter for applying regularization (a penalty) to the weights.
*Note that this "kernel" is used differently from the kernel in classical machine learning (e.g., SVM kernels).
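A hypothetical Dense layer illustrating the parameters above: units=10 neurons, softmax activation, and L2 weight regularization passed via kernel_regularizer (the strength 0.01 is an assumption, not from the book).

```python
from tensorflow.keras.layers import Dense
from tensorflow.keras import regularizers

# units / activation / kernel_regularizer as described above
layer = Dense(units=10, activation='softmax',
              kernel_regularizer=regularizers.l2(0.01))
```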
Setting the optimization algorithm and loss function
model.compile(optimizer='sgd', loss='categorical_crossentropy')
- For multi-class classification, use a gradient descent optimizer and the cross-entropy loss function.
optimizer = optimization algorithm
loss = loss function
metrics=['accuracy']
- Adds accuracy to the metrics recorded during training.
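A sketch of compiling the model with these settings (the two-layer architecture is the same assumed one as before):

```python
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense

model = Sequential([Dense(100, activation='sigmoid', input_shape=(784,)),
                    Dense(10, activation='softmax')])

# SGD + categorical cross-entropy for multi-class classification;
# metrics=['accuracy'] records accuracy alongside the loss each epoch.
model.compile(optimizer='sgd', loss='categorical_crossentropy',
              metrics=['accuracy'])
```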
Training the model
history = model.fit(x_train, ..., epochs=..., ..., validation_data=(x_val, y_val))
history
- The history attribute of the returned History object is a dictionary containing several metrics measured during training.
validation_data
- Parameter that receives the validation set (the portion split off from the training data) as a tuple.
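An end-to-end sketch of fit() and the history dictionary. The data here is tiny random stand-in data (shapes mimic flattened Fashion-MNIST images with a train/validation split; the sample counts are made up to keep it fast):

```python
import numpy as np
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.utils import to_categorical

# random stand-in data: 784 features, 10 one-hot-encoded classes
x_train = np.random.rand(64, 784).astype('float32')
y_train = to_categorical(np.random.randint(0, 10, 64), 10)
x_val = np.random.rand(16, 784).astype('float32')
y_val = to_categorical(np.random.randint(0, 10, 16), 10)

model = Sequential([Dense(100, activation='sigmoid', input_shape=(784,)),
                    Dense(10, activation='softmax')])
model.compile(optimizer='sgd', loss='categorical_crossentropy',
              metrics=['accuracy'])

# validation_data takes the validation set as an (x, y) tuple
history = model.fit(x_train, y_train, epochs=2,
                    validation_data=(x_val, y_val), verbose=0)

print(sorted(history.history.keys()))  # training and validation metrics per epoch
```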
MultiClassNetwork_For_Keras_2020_5_14
Train on 48000 samples, validate on 12000 samples
Epoch 1/40
48000/48000 [==============================] - 3s 59us/sample - loss: 1.3736 - accuracy: 0.6530 - val_loss: 0.9597 - val_accuracy: 0.7324
Epoch 2/40
48000/48000 [==============================] - 3s 55us/sample - loss: 0.8392 - accuracy: 0.7465 - val_loss: 0.7479 - val_accuracy: 0.7582
Epoch 3/40
48000/48000 [==============================] - 3s 52us/sample - loss: 0.7076 - accuracy: 0.7668 - val_loss: 0.6617 - val_accuracy: 0.7756
Epoch 4/40
48000/48000 [==============================] - 3s 60us/sample - loss: 0.6435 - accuracy: 0.7807 - val_loss: 0.6118 - val_accuracy: 0.7895
Epoch 5/40
48000/48000 [==============================] - 3s 54us/sample - loss: 0.6028 - accuracy: 0.7934 - val_loss: 0.5766 - val_accuracy: 0.8023
Epoch 6/40
48000/48000 [==============================] - 2s 47us/sample - loss: 0.5733 - accuracy: 0.8034 - val_loss: 0.5510 - val_accuracy: 0.8094
Epoch 7/40
48000/48000 [==============================] - 2s 52us/sample - loss: 0.5513 - accuracy: 0.8107 - val_loss: 0.5305 - val_accuracy: 0.8173
Epoch 8/40
48000/48000 [==============================] - 2s 48us/sample - loss: 0.5335 - accuracy: 0.8177 - val_loss: 0.5150 - val_accuracy: 0.8208
Epoch 9/40
48000/48000 [==============================] - 3s 53us/sample - loss: 0.5192 - accuracy: 0.8222 - val_loss: 0.5014 - val_accuracy: 0.8266
Epoch 10/40
48000/48000 [==============================] - 3s 55us/sample - loss: 0.5072 - accuracy: 0.8251 - val_loss: 0.4933 - val_accuracy: 0.8275
Epoch 11/40
48000/48000 [==============================] - 3s 55us/sample - loss: 0.4973 - accuracy: 0.8281 - val_loss: 0.4809 - val_accuracy: 0.8317
Epoch 12/40
48000/48000 [==============================] - 3s 62us/sample - loss: 0.4881 - accuracy: 0.8307 - val_loss: 0.4739 - val_accuracy: 0.8331
Epoch 13/40
48000/48000 [==============================] - 3s 62us/sample - loss: 0.4805 - accuracy: 0.8333 - val_loss: 0.4660 - val_accuracy: 0.8360
Epoch 14/40
48000/48000 [==============================] - 3s 57us/sample - loss: 0.4738 - accuracy: 0.8358 - val_loss: 0.4603 - val_accuracy: 0.8387
Epoch 15/40
48000/48000 [==============================] - 3s 64us/sample - loss: 0.4676 - accuracy: 0.8379 - val_loss: 0.4559 - val_accuracy: 0.8384
Epoch 16/40
48000/48000 [==============================] - 3s 61us/sample - loss: 0.4619 - accuracy: 0.8400 - val_loss: 0.4499 - val_accuracy: 0.8412
Epoch 17/40
48000/48000 [==============================] - 3s 63us/sample - loss: 0.4568 - accuracy: 0.8416 - val_loss: 0.4454 - val_accuracy: 0.8432
Epoch 18/40
48000/48000 [==============================] - 3s 53us/sample - loss: 0.4520 - accuracy: 0.8430 - val_loss: 0.4414 - val_accuracy: 0.8447
Epoch 19/40
48000/48000 [==============================] - 2s 51us/sample - loss: 0.4476 - accuracy: 0.8443 - val_loss: 0.4361 - val_accuracy: 0.8453
Epoch 20/40
48000/48000 [==============================] - 2s 52us/sample - loss: 0.4435 - accuracy: 0.8458 - val_loss: 0.4336 - val_accuracy: 0.8462
Epoch 21/40
48000/48000 [==============================] - 2s 52us/sample - loss: 0.4395 - accuracy: 0.8471 - val_loss: 0.4294 - val_accuracy: 0.8487
Epoch 22/40
48000/48000 [==============================] - 2s 52us/sample - loss: 0.4360 - accuracy: 0.8481 - val_loss: 0.4258 - val_accuracy: 0.8484
Epoch 23/40
48000/48000 [==============================] - 2s 52us/sample - loss: 0.4327 - accuracy: 0.8488 - val_loss: 0.4234 - val_accuracy: 0.8504
Epoch 24/40
48000/48000 [==============================] - 3s 53us/sample - loss: 0.4291 - accuracy: 0.8503 - val_loss: 0.4197 - val_accuracy: 0.8531
Epoch 25/40
48000/48000 [==============================] - 3s 59us/sample - loss: 0.4261 - accuracy: 0.8513 - val_loss: 0.4177 - val_accuracy: 0.8513
Epoch 26/40
48000/48000 [==============================] - 3s 72us/sample - loss: 0.4234 - accuracy: 0.8522 - val_loss: 0.4143 - val_accuracy: 0.8543
Epoch 27/40
48000/48000 [==============================] - 3s 58us/sample - loss: 0.4204 - accuracy: 0.8525 - val_loss: 0.4117 - val_accuracy: 0.8560
Epoch 28/40
48000/48000 [==============================] - 3s 58us/sample - loss: 0.4174 - accuracy: 0.8537 - val_loss: 0.4096 - val_accuracy: 0.8559
Epoch 29/40
48000/48000 [==============================] - 3s 54us/sample - loss: 0.4151 - accuracy: 0.8555 - val_loss: 0.4072 - val_accuracy: 0.8577
Epoch 30/40
48000/48000 [==============================] - 2s 51us/sample - loss: 0.4125 - accuracy: 0.8554 - val_loss: 0.4053 - val_accuracy: 0.8586
Epoch 31/40
48000/48000 [==============================] - 2s 51us/sample - loss: 0.4102 - accuracy: 0.8555 - val_loss: 0.4040 - val_accuracy: 0.8593
Epoch 32/40
48000/48000 [==============================] - 2s 52us/sample - loss: 0.4079 - accuracy: 0.8571 - val_loss: 0.4020 - val_accuracy: 0.8581
Epoch 33/40
48000/48000 [==============================] - 2s 51us/sample - loss: 0.4057 - accuracy: 0.8576 - val_loss: 0.4022 - val_accuracy: 0.8584
Epoch 34/40
48000/48000 [==============================] - 3s 52us/sample - loss: 0.4034 - accuracy: 0.8583 - val_loss: 0.3988 - val_accuracy: 0.8617
Epoch 35/40
48000/48000 [==============================] - 2s 52us/sample - loss: 0.4015 - accuracy: 0.8588 - val_loss: 0.3972 - val_accuracy: 0.8621
Epoch 36/40
48000/48000 [==============================] - 2s 52us/sample - loss: 0.3994 - accuracy: 0.8598 - val_loss: 0.3955 - val_accuracy: 0.8609
Epoch 37/40
48000/48000 [==============================] - 3s 53us/sample - loss: 0.3974 - accuracy: 0.8602 - val_loss: 0.3938 - val_accuracy: 0.8612
Epoch 38/40
48000/48000 [==============================] - 2s 52us/sample - loss: 0.3955 - accuracy: 0.8611 - val_loss: 0.3920 - val_accuracy: 0.8620
Epoch 39/40
48000/48000 [==============================] - 3s 52us/sample - loss: 0.3937 - accuracy: 0.8622 - val_loss: 0.3911 - val_accuracy: 0.8633
Epoch 40/40
48000/48000 [==============================] - 3s 52us/sample - loss: 0.3918 - accuracy: 0.8626 - val_loss: 0.3891 - val_accuracy: 0.8637
dict_keys(['loss', 'accuracy', 'val_loss', 'val_accuracy'])
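The keys above can be used to draw learning curves. A minimal sketch, using the first three epochs copied from the training log (the Agg backend lets it run without a display):

```python
import matplotlib
matplotlib.use('Agg')  # headless backend, no display needed
import matplotlib.pyplot as plt

# first three epochs from the log above
loss = [1.3736, 0.8392, 0.7076]
val_loss = [0.9597, 0.7479, 0.6617]

plt.plot(loss, label='train loss')
plt.plot(val_loss, label='val loss')
plt.xlabel('epoch')
plt.ylabel('loss')
plt.legend()
plt.savefig('loss_curve.png')
```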
Even with Keras the accuracy does not improve by much, because a fully connected network is not a model well suited to image data. As in the previous post, a shirt was classified as a bag.
In the next exercise I will build a model with a convolutional neural network, which is effective for image classification, and classify the shirt directly.