Citation Export
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Prasad, Supriya Kumari | - |
dc.contributor.author | Ko, Young Bae | - |
dc.date.issued | 2022-01-01 | - |
dc.identifier.uri | https://aurora.ajou.ac.kr/handle/2018.oak/36817 | - |
dc.identifier.uri | https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85143257175&origin=inward | - |
dc.description.abstract | Recognizing human activities from video clips or still images is a challenging task because of variations in scale, viewpoint, lighting, and the appearance of source images. Human activity recognition is a difficult time-series classification problem: it involves predicting a person's action from image-sensor data and typically requires deep domain expertise and image-processing techniques to extract meaningful features from the raw data before fitting an artificial-intelligence model. Currently available models are computationally expensive and lack classification accuracy, so there is a need for a human activity recognition model that is accurate and can be used efficiently in real-world applications. Such a model is not only practical but also broadly useful in a large number of applications, such as monitoring elderly people living alone or unattended patients in a hospital. In the proposed model, the source video dataset is carefully prepared for meaningful and concise feature extraction using techniques such as optical flow and 2D spatio-temporal feature extraction. These features are then fed to a VGG-19 network for training, which improves the accuracy of the human activity recognition model compared with the existing system. | - |
dc.description.sponsorship | ACKNOWLEDGMENT This work was partially supported by the National Research Foundation of Korea (NRF) grant funded by the Ministry of Science and ICT (MSIT) (NRF-2020R1A2C1102284), and by the MSIT (Ministry of Science and ICT), Korea, under the ITRC (Information Technology Research Center) support program (IITP-2022-2018-0-01431) supervised by the IITP (Institute for Information and Communications Technology Planning and Evaluation). | - |
dc.language.iso | eng | - |
dc.publisher | IEEE Computer Society | - |
dc.subject.mesh | Auto encoders | - |
dc.subject.mesh | BoW | - |
dc.subject.mesh | Features extraction | - |
dc.subject.mesh | HAR | - |
dc.subject.mesh | Human actions | - |
dc.subject.mesh | Human activity recognition | - |
dc.subject.mesh | ML | - |
dc.subject.mesh | Source images | - |
dc.subject.mesh | VGG-19 | - |
dc.subject.mesh | Video-clips | - |
dc.title | Deep Learning Based Human Activity Recognition With Improved Accuracy | - |
dc.type | Conference | - |
dc.citation.conferenceDate | 2022.10.19. ~ 2022.10.21. | - |
dc.citation.conferenceName | 13th International Conference on Information and Communication Technology Convergence, ICTC 2022 | - |
dc.citation.edition | ICTC 2022 - 13th International Conference on Information and Communication Technology Convergence: Accelerating Digital Transformation with ICT Innovation | - |
dc.citation.endPage | 1495 | - |
dc.citation.startPage | 1492 | - |
dc.citation.title | International Conference on ICT Convergence | - |
dc.citation.volume | 2022-October | - |
dc.identifier.bibliographicCitation | International Conference on ICT Convergence, Vol.2022-October, pp.1492-1495 | - |
dc.identifier.doi | 10.1109/ictc55196.2022.9952720 | - |
dc.identifier.scopusid | 2-s2.0-85143257175 | - |
dc.identifier.url | http://ieeexplore.ieee.org/xpl/conferences.jsp | - |
dc.subject.keyword | AI | - |
dc.subject.keyword | Autoencoders | - |
dc.subject.keyword | BoW | - |
dc.subject.keyword | HAR | - |
dc.subject.keyword | ML | - |
dc.subject.keyword | VGG-19 | - |
dc.type.other | Conference Paper | - |
dc.description.isoa | false | - |
dc.subject.subarea | Information Systems | - |
dc.subject.subarea | Computer Networks and Communications | - |
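The pipeline described in the abstract, dense optical-flow feature extraction followed by a VGG-19 classifier, can be illustrated with a short sketch. This is a minimal, hypothetical reconstruction and not the authors' released code: it assumes an OpenCV/PyTorch toolchain, Farneback dense optical flow with an HSV encoding of the flow field, and an assumed number of activity classes (`NUM_CLASSES`); names such as `flow_to_rgb` and `classify_clip` are illustrative only.

```python
# Illustrative sketch only -- not the paper's released implementation.
import cv2
import numpy as np
import torch
import torch.nn as nn
from torchvision import models, transforms

NUM_CLASSES = 10  # assumed number of activity classes

def flow_to_rgb(prev_gray, gray):
    """Encode Farneback dense optical flow as an RGB image via HSV mapping."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    hsv = np.zeros((*gray.shape, 3), dtype=np.uint8)
    hsv[..., 0] = ang * 180 / np.pi / 2                               # hue   <- flow direction
    hsv[..., 1] = 255                                                 # full saturation
    hsv[..., 2] = cv2.normalize(mag, None, 0, 255, cv2.NORM_MINMAX)   # value <- flow magnitude
    return cv2.cvtColor(hsv, cv2.COLOR_HSV2RGB)

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Resize((224, 224)),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# VGG-19 backbone (ImageNet weights) with the last layer swapped for HAR classes.
model = models.vgg19(weights="IMAGENET1K_V1")
model.classifier[6] = nn.Linear(4096, NUM_CLASSES)
model.eval()

def classify_clip(video_path):
    """Average per-frame predictions over the optical-flow images of one clip."""
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    logits = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        rgb_flow = flow_to_rgb(prev_gray, gray)
        with torch.no_grad():
            logits.append(model(preprocess(rgb_flow).unsqueeze(0)))
        prev_gray = gray
    cap.release()
    return torch.stack(logits).mean(0).argmax(1).item()
```

In this sketch the flow field is rendered as a three-channel image so that an unmodified VGG-19 input stage can consume it; the paper's actual 2D spatio-temporal feature extraction and training procedure may differ.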