Ajou University repository

Deep Learning Based Human Activity Recognition With Improved Accuracy
Citations (SCOPUS): 0

DC Field: Value
dc.contributor.author: Prasad, Supriya Kumari
dc.contributor.author: Ko, Young Bae
dc.date.issued: 2022-01-01
dc.identifier.uri: https://aurora.ajou.ac.kr/handle/2018.oak/36817
dc.identifier.uri: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85143257175&origin=inward
dc.description.abstract: Recognizing human activities from video clips or still images is a challenging task owing to variations in scale, viewpoint, lighting, and appearance of the source images. Human activity recognition is a difficult time-series classification problem: it involves predicting a person's activity from image-sensor data, and it typically requires deep domain expertise and image-processing techniques to extract meaningful features from the raw data before fitting a machine-learning model. Currently available models are highly time-consuming and lack classification accuracy, so there is a need for a human activity recognition model that is both accurate and efficient enough for real-world applications. Such a model would be practical as well as broadly useful, for example in monitoring and caring for elderly people living alone, or watching over unattended patients in a hospital. In the proposed model, the source video dataset is carefully prepared for concise, meaningful feature extraction using techniques such as optical flow and 2D spatio-temporal feature extraction. These features are then fed to a VGG-19 network for training, which effectively improves the accuracy of the human activity recognition model compared with the existing system.
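The pipeline the abstract describes (motion features extracted from video, then fed to a VGG-19 classifier) can be sketched loosely as follows. This is a simplified illustration, not the paper's implementation: simple frame differencing stands in for true optical flow, the helper names `temporal_features` and `to_vgg_input` are hypothetical, and the VGG-19 network itself is not shown — only the shaping of a motion map into VGG-19's expected (224, 224, 3) input.

```python
import numpy as np

def temporal_features(frames):
    """Crude stand-in for optical flow: absolute differences between
    consecutive frames capture per-pixel motion over the clip."""
    frames = np.asarray(frames, dtype=np.float32)
    return np.abs(np.diff(frames, axis=0))   # (T-1, H, W)

def to_vgg_input(motion):
    """Collapse the temporal axis into a single motion-energy map and
    tile it to 3 channels to match VGG-19's (H, W, 3) input layout
    (assumes frames are already 224x224; resizing is omitted)."""
    energy = motion.mean(axis=0)             # (H, W)
    return np.stack([energy] * 3, axis=-1)   # (H, W, 3)

# Toy clip: 16 random 224x224 grayscale frames.
clip = np.random.rand(16, 224, 224)
x = to_vgg_input(temporal_features(clip))
print(x.shape)  # (224, 224, 3)
```

In a real system, `x` would be batched and passed to a pretrained VGG-19 (e.g. via a deep-learning framework) for fine-tuning on activity labels.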
dc.description.sponsorship: This work was partially supported by the National Research Foundation of Korea (NRF) grant funded by the Ministry of Science and ICT (MSIT) (NRF-2020R1A2C1102284), and by the MSIT (Ministry of Science and ICT), Korea, under the ITRC (Information Technology Research Center) support program (IITP-2022-2018-0-01431) supervised by the IITP (Institute for Information and Communications Technology Planning and Evaluation).
dc.language.iso: eng
dc.publisher: IEEE Computer Society
dc.subject.mesh: Autoencoders
dc.subject.mesh: BoW
dc.subject.mesh: Feature extraction
dc.subject.mesh: HAR
dc.subject.mesh: Human actions
dc.subject.mesh: Human activity recognition
dc.subject.mesh: ML
dc.subject.mesh: Source images
dc.subject.mesh: VGG-19
dc.subject.mesh: Video clips
dc.title: Deep Learning Based Human Activity Recognition With Improved Accuracy
dc.type: Conference
dc.citation.conferenceDate: 2022.10.19. ~ 2022.10.21.
dc.citation.conferenceName: 13th International Conference on Information and Communication Technology Convergence, ICTC 2022
dc.citation.edition: ICTC 2022 - 13th International Conference on Information and Communication Technology Convergence: Accelerating Digital Transformation with ICT Innovation
dc.citation.endPage: 1495
dc.citation.startPage: 1492
dc.citation.title: International Conference on ICT Convergence
dc.citation.volume: 2022-October
dc.identifier.bibliographicCitation: International Conference on ICT Convergence, Vol.2022-October, pp.1492-1495
dc.identifier.doi: 10.1109/ictc55196.2022.9952720
dc.identifier.scopusid: 2-s2.0-85143257175
dc.identifier.url: http://ieeexplore.ieee.org/xpl/conferences.jsp
dc.subject.keyword: AI
dc.subject.keyword: Autoencoders
dc.subject.keyword: BoW
dc.subject.keyword: HAR
dc.subject.keyword: ML
dc.subject.keyword: VGG-19
dc.type.other: Conference Paper
dc.description.isoa: false
dc.subject.subarea: Information Systems
dc.subject.subarea: Computer Networks and Communications

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher

Ko, Young-Bae (고영배)
Department of Software and Computer Engineering

File Download

  • There are no files associated with this item.