Thesis Information

Document type
Thesis
Author
김승현 (The Catholic University of Korea, Graduate School)
Advisor
강호철
Year of publication
2023
Copyright
Theses of The Catholic University of Korea are protected by copyright.


Abstract · Keywords

This thesis proposes a ResNet-based model for classifying lung-cancer pathology images using convolutional neural networks and attention mechanisms. ResNet's shortcut connection, which adds a block's input to its output, not only alleviates the gradient vanishing problem but also maintains strong performance even when layers are stacked deeply. Building on this idea, the model adopts a pre-activation structure in which batch normalization and the activation function are moved ahead of each convolution layer, so that the identity path carries the input through unchanged. In addition, within ResNet's bottleneck structure of 1x1, 3x3, and 1x1 convolution layers, a depthwise separable convolution is used: a depthwise convolution that operates along the spatial dimensions, combined with a 1x1 (pointwise) convolution that operates along the channel dimension. Channel attention and spatial attention are then applied after the bottleneck as attention mechanisms that emphasize informative features. Finally, the model ends with batch normalization and a ReLU activation, followed by average pooling and a fully connected layer with a sigmoid activation. Measured by accuracy, precision, recall, and F1 score, the proposed method achieves 79.01%, 79.28%, 79.01%, and 78.92%, respectively. Experiments show that it outperforms existing ResNet-based models.
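The depthwise separable convolution described above factors a standard convolution into a per-channel spatial filter followed by a 1x1 channel mixer. A minimal NumPy sketch of the idea (function names and shapes are illustrative, not taken from the thesis):

```python
import numpy as np

def depthwise_conv3x3(x, dw_kernels):
    """Depthwise conv: one 3x3 filter per channel, 'same' padding, stride 1.
    x: (C, H, W), dw_kernels: (C, 3, 3) -> (C, H, W)."""
    C, H, W = x.shape
    xp = np.pad(x, ((0, 0), (1, 1), (1, 1)))
    out = np.zeros_like(x)
    for c in range(C):                       # spatial filtering, per channel
        for i in range(H):
            for j in range(W):
                out[c, i, j] = np.sum(xp[c, i:i+3, j:j+3] * dw_kernels[c])
    return out

def pointwise_conv1x1(x, pw_kernels):
    """Pointwise conv: mixes channels at each pixel.
    x: (C, H, W), pw_kernels: (C_out, C) -> (C_out, H, W)."""
    return np.tensordot(pw_kernels, x, axes=([1], [0]))

def depthwise_separable_conv(x, dw_kernels, pw_kernels):
    return pointwise_conv1x1(depthwise_conv3x3(x, dw_kernels), pw_kernels)

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 16, 16))         # 8 channels, 16x16 feature map
dw = rng.standard_normal((8, 3, 3))
pw = rng.standard_normal((16, 8))
y = depthwise_separable_conv(x, dw, pw)
print(y.shape)                               # (16, 16, 16)

# Parameter count vs. a standard 3x3 conv mapping 8 -> 16 channels:
standard = 16 * 8 * 3 * 3                    # 1152 weights
separable = 8 * 3 * 3 + 16 * 8               # 200 weights
```

The factorization keeps the output shape of the standard convolution while using far fewer weights, which is why it is attractive inside a bottleneck block.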
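The channel and spatial attention applied after the bottleneck can be sketched as follows. This follows the general SE-Net and CBAM formulations the thesis builds on; the weight shapes, reduction ratio, and kernel size here are illustrative assumptions, not the thesis's actual configuration:

```python
import numpy as np

def channel_attention(x, w1, w2):
    """SE-style channel attention. x: (C, H, W).
    Squeeze: global average pool -> (C,).
    Excite: FC + ReLU, then FC + sigmoid -> per-channel weights in (0, 1)."""
    s = x.mean(axis=(1, 2))                      # squeeze: (C,)
    h = np.maximum(w1 @ s, 0.0)                  # FC + ReLU: (C//r,)
    a = 1.0 / (1.0 + np.exp(-(w2 @ h)))          # FC + sigmoid: (C,)
    return x * a[:, None, None]                  # rescale each channel

def spatial_attention(x, k):
    """CBAM-style spatial attention. x: (C, H, W), k: (2, kh, kw).
    Pool over channels (avg and max), convolve the 2-channel map, sigmoid."""
    stacked = np.stack([x.mean(axis=0), x.max(axis=0)])  # (2, H, W)
    kh, kw = k.shape[1:]
    ph, pw = kh // 2, kw // 2
    sp = np.pad(stacked, ((0, 0), (ph, ph), (pw, pw)))
    H, W = x.shape[1:]
    logits = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            logits[i, j] = np.sum(sp[:, i:i+kh, j:j+kw] * k)
    a = 1.0 / (1.0 + np.exp(-logits))            # (H, W) map in (0, 1)
    return x * a[None, :, :]

rng = np.random.default_rng(0)
feat = rng.standard_normal((4, 6, 6))
w1 = rng.standard_normal((2, 4))                 # reduction ratio r = 2 (assumed)
w2 = rng.standard_normal((4, 2))
k = rng.standard_normal((2, 7, 7))               # 7x7 kernel, as in CBAM
out = spatial_attention(channel_attention(feat, w1, w2), k)
print(out.shape)                                 # (4, 6, 6)
```

Both attentions produce multiplicative weights in (0, 1), so they suppress uninformative channels or locations without changing the feature map's shape.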
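The reported accuracy, precision, recall, and F1 score can be computed as below. Macro averaging across classes is assumed here, since the abstract does not state which averaging was used:

```python
def macro_metrics(y_true, y_pred, classes):
    """Accuracy plus macro-averaged precision, recall, and F1 score."""
    acc = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    precs, recs, f1s = [], [], []
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if p == c and t == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if p == c and t != c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if p != c and t == c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        precs.append(prec); recs.append(rec); f1s.append(f1)
    n = len(classes)
    return acc, sum(precs) / n, sum(recs) / n, sum(f1s) / n

acc, p, r, f = macro_metrics([0, 0, 1, 1, 2], [0, 1, 1, 1, 2], [0, 1, 2])
print(acc, p, r, f)   # 0.8, then the three macro-averaged scores
```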

Table of Contents

Acknowledgments
Abstract
Ⅰ. Introduction
1.1 Background and purpose of the research
1.2 Organization of the thesis
Ⅱ. Theoretical Background
2.1 Residual Learning Network
2.2 Attention Mechanism
2.3 Channel Attention
2.3.1 SE-Net (Squeeze-and-Excitation Networks)
2.3.2 ECA-Net (Efficient Channel Attention for Deep CNN)
2.4 Spatial Attention
2.4.1 CBAM (Convolutional Block Attention Module)
2.5 Depthwise Separable Convolution
Ⅲ. Proposed Improvements
3.1 Performance improvements
3.1.1 Modified input stem
3.1.2 Modified downsampling block
3.2 Applying depthwise convolution and attention mechanisms
Ⅳ. Experimental Results
4.1 Dataset
4.2 Training environment
4.3 Class Activation Map (CAM)
4.4 Performance comparison with existing models
Ⅴ. Conclusion
Ⅵ. References
Ⅶ. English thesis submission form
Ⅷ. English approval sheet
Ⅸ. English abstract
