Basic Paper Information

Document Type
Academic Journal
Author
Yan Shi (University of Sanya)
Journal
IEIE Transactions on Smart Processing & Computing, Vol. 12, No. 1 (The Institute of Electronics and Information Engineers)
Publication Date
February 2023
Pages
55-63 (9 pages)
DOI
10.5573/IEIESPC.2023.12.1.55


Abstract · Keywords

In recent years, due to the sudden outbreak of public health events, online teaching has become a mainstream teaching approach, and the number of teaching videos has increased rapidly. Extracting active image information from videos is therefore of great importance for video understanding. This research proposes extracting image features in the spatiotemporal dimension based on deep learning, using a spatiotemporal network for skeleton-based action recognition, and building a CSTGAT model on top of a convolutional neural network. The experimental results show that, after training with the convolutional neural network, the CSTGAT model achieved an accuracy of 98.47%, a precision of 97.43%, and a recall of 71.65%, and needed only 217 iterations to reach stable convergence. After 100 tests, the F1 score of the CSTGAT model was 96.83%. In summary, the proposed model has high accuracy, a comprehensive query (recall) rate, and good model expressiveness. It could provide a solution for intelligent long-distance human-machine interaction and could be used in online teaching.
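The abstract names a CSTGAT model (a convolutional spatiotemporal graph attention network over skeleton data) but does not specify its architecture, so the following is only a minimal illustrative sketch of the general technique it describes: graph attention over skeleton joints within each frame, followed by temporal convolution across frames, in the spirit of ST-GCN-style stacks. All layer choices, shapes, and names (SpatialGraphAttention, STGATBlock, kernel sizes, joint count) are assumptions, not the paper's method.

# Hypothetical sketch of a spatiotemporal graph-attention block for
# skeleton-based action recognition; not the paper's actual CSTGAT.
# Tensor shapes: N=batch, C=channels, T=frames, V=skeleton joints.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialGraphAttention(nn.Module):
    """Single-head graph attention over the V joints of each frame."""
    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.proj = nn.Linear(in_channels, out_channels)
        self.attn = nn.Linear(2 * out_channels, 1)

    def forward(self, x):                                 # x: (N, T, V, C)
        h = self.proj(x)                                  # (N, T, V, C')
        V = h.size(2)
        hi = h.unsqueeze(3).expand(-1, -1, -1, V, -1)     # (N, T, V, V, C')
        hj = h.unsqueeze(2).expand(-1, -1, V, -1, -1)     # (N, T, V, V, C')
        e = F.leaky_relu(self.attn(torch.cat([hi, hj], dim=-1)).squeeze(-1))
        a = torch.softmax(e, dim=-1)                      # weights over joints
        return torch.einsum('ntij,ntjc->ntic', a, h)      # attention-weighted sum

class STGATBlock(nn.Module):
    """Spatial attention per frame, then a temporal convolution across frames."""
    def __init__(self, in_channels, out_channels, kernel_t=9):
        super().__init__()
        self.sgat = SpatialGraphAttention(in_channels, out_channels)
        self.tconv = nn.Conv2d(out_channels, out_channels,
                               kernel_size=(kernel_t, 1),
                               padding=(kernel_t // 2, 0))

    def forward(self, x):                                 # x: (N, C, T, V)
        h = self.sgat(x.permute(0, 2, 3, 1))              # spatial attention
        h = h.permute(0, 3, 1, 2)                         # back to (N, C', T, V)
        return F.relu(self.tconv(h))                      # mix across time

# Smoke test with hypothetical values: 2 clips, 3 input channels
# (x, y, confidence), 32 frames, 17 COCO-style joints.
if __name__ == "__main__":
    model = STGATBlock(3, 64)
    out = model(torch.randn(2, 3, 32, 17))
    print(out.shape)  # torch.Size([2, 64, 32, 17])

Stacking several such blocks and pooling over frames and joints would yield a clip-level feature for an action classifier; whether the paper uses this exact decomposition of spatial attention and temporal convolution is not stated in the abstract.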

Table of Contents

Abstract
1. Introduction
2. Related Work
3. Construction of CSTGAT Model based on CNN
4. Performance Analysis of CSTGAT Model based on CNN
5. Conclusion
References


UCI(KEPA) : I410-ECN-0101-2023-569-000401336