Updated on 2024/02/01


 
Nakazawa Atsushi
 
Organization
Faculty of Interdisciplinary Science and Engineering in Health Systems, Okayama University
Position
Professor

Degree

  • Engineering (Osaka University)

Research Interests

  • Human Motion Analysis

  • Digital Archive

  • Distributed Computer Vision

  • Image Sensing

  • Distributed Vision Systems

  • Image Measurement

Research Areas

  • Informatics / Perceptual information processing

Education

  • Osaka University   Graduate School of Engineering Science   Department of Systems and Human Science

    - 2001


    Country: Japan

    researchmap


  • Osaka University   School of Engineering Science   Department of Systems Engineering

    - 1997


    Country: Japan

    researchmap


Research History

  • Okayama University   Graduate School of Interdisciplinary Science and Engineering in Health Systems   Professor

    2023.4

  • Osaka University   Graduate School of Information Science and Technology (concurrently Cybermedia Center)   Lecturer

    2003

  • Osaka University   Cybermedia Center, Informedia Education Division   Lecturer

  • Japan Science and Technology Corporation   Researcher (Institute of Industrial Science, the University of Tokyo)

    2001

    Dr. Atsushi Nakazawa graduated from the Department of Engineering Science, Osaka University, in 2001. He then worked for two years as a postdoctoral researcher in Prof. Katsushi Ikeuchi's laboratory, during which time he served as sub-leader of the cultural-heritage digitization team, and subsequently became an assistant professor (lecturer) at the Osaka University Cybermedia Center, Informedia Education Division. His research interests include human motion recognition and analysis, distributed vision systems, motion planning and motion acquisition for humanoid robots, the digitization of tangible and intangible cultural properties, the recognition and analysis of intangible cultural properties (dance motions) from digital data, and applications to virtual reality and 3D measurement.

Professional Memberships


Papers

  • Phase Randomization: A data augmentation for domain adaptation in human action recognition

    Yu Mitsuzumi, Go Irie, Akisato Kimura, Atsushi Nakazawa

    Pattern Recognition   146   110051 - 110051   2024.2


    Publishing type:Research paper (scientific journal)   Publisher:Elsevier BV  

    DOI: 10.1016/j.patcog.2023.110051

    researchmap

  • Capturing Contact Surfaces by a Frustrated Total Internal Reflection System using a Curved Plate for Analysis of the Beginning of Humanitude’s Touching Motions Reviewed

    41 ( 10 )   2023.10


    Language:Japanese   Publishing type:Research paper (scientific journal)  

    researchmap

  • Behavioural changes in the interaction between child with autism spectrum disorder and mother through the Comprehensive Care Humanitude™ intervention Reviewed

    Miyuki Iwamoto, Atsushi Nakazawa, Miwako Honda, Sakiko Yoshikawa, Toshihiro Kato, Yves Gineste

    Proceedings of the Annual Meeting of the Cognitive Science Society   45 ( 45 )   2023.7


  • Behavioral and neural underpinnings of empathic characteristics in a Humanitude-care expert

    Wataru Sato, Atsushi Nakazawa, Sakiko Yoshikawa, Takanori Kochiyama, Miwako Honda, Yves Gineste

    Frontiers in Medicine   10   2023.5


    Publishing type:Research paper (scientific journal)   Publisher:Frontiers Media SA  

    Background

    Humanitude approaches have shown positive effects in elderly care. However, the behavioral and neural underpinnings of empathic characteristics in Humanitude-care experts remain unknown.

    Methods

    We investigated the empathic characteristics of a Humanitude-care expert (YG) and those of age-, sex-, and race-matched controls (n = 13). In a behavioral study, we measured subjective valence and arousal ratings and facial electromyography (EMG) of the corrugator supercilii and zygomatic major muscles while participants passively observed dynamic facial expressions associated with anger and happiness and their randomized mosaic patterns. In a functional magnetic resonance imaging (MRI) study, we measured brain activity while participants passively observed the same dynamic facial expressions and mosaics. In a structural MRI study, we acquired structural MRI data and analyzed gray matter volume.

    Results

    Our behavioral data showed that YG experienced higher subjective arousal and showed stronger facial EMG activity congruent with stimulus facial expressions compared with controls. The functional MRI data demonstrated that YG showed stronger activity in the ventral premotor cortex (PMv; covering the precentral gyrus and inferior frontal gyrus) and posterior middle temporal gyrus in the right hemisphere in response to dynamic facial expressions versus dynamic mosaics compared with controls. The structural MRI data revealed higher regional gray matter volume in the right PMv in YG than in controls.

    Conclusion

    These results suggest that Humanitude-care experts have behavioral and neural characteristics associated with empathic social interactions.

    DOI: 10.3389/fmed.2023.1059203

    researchmap

  • Augmented reality-based affective training for improving care communication skill and empathy. International journal

    Atsushi Nakazawa, Miyuki Iwamoto, Ryo Kurazume, Masato Nunoi, Masaki Kobayashi, Miwako Honda

    PloS one   18 ( 7 )   e0288175   2023


    Language:English   Publishing type:Research paper (scientific journal)  

    It is important for caregivers of people with dementia (PwD) to have good patient communication skills as it has been known to reduce the behavioral and psychological symptoms of dementia (BPSD) of PwD as well as caregiver burnout. However, acquiring such skills often requires one-on-one affective training, which can be costly. In this study, we propose affective training using augmented reality (AR) for supporting the acquisition of such skills. The system uses see-through AR glasses and a nursing training doll to train the user in both practical nursing skills and affective skills such as eye contact and patient communication. The experiment was conducted with 38 nursing students. The participants were assigned to either the Doll group, which only used a doll for training, or the AR group, which used both a doll and the AR system. The results showed that eye contact significantly increased and the face-to-face distance and angle decreased in the AR group, while the Doll group had no significant difference. In addition, the empathy score of the AR group significantly increased after the training. Upon analyzing the correlation between personality and changes of physical skills, we found a significant positive correlation between the improvement rate of eye contact and extraversion in the AR group. These results demonstrated that affective training using AR is effective for improving caregivers' physical skills and their empathy for their patients. We believe that this system will be beneficial not only for dementia caregivers but for anyone looking to improve their general communication skills.

    DOI: 10.1371/journal.pone.0288175

    PubMed

    researchmap

  • Simulated communication skills training program effects using augmented reality with real‐time feedback: A randomized control study

    Masaki Kobayashi, Miyuki Iwamoto, Saki Une, Ryo Kurazume, Atsushi Nakazawa, Miwako Honda

    Alzheimer's & Dementia   18 ( S8 )   2022.12


    Publishing type:Research paper (scientific journal)   Publisher:Wiley  

    Abstract

    Background

    While communication with dementia patients is challenging, known educational methods to improve communication skill for medical professionals are lacking. Our study aimed to assess the efficacy of simulated communication skills training for nursing students using augmented reality (AR) with real‐time feedback.

    Methods

    This is a randomized control study. Twenty-five nursing students enrolled and learned standardized multimodal comprehensive care communication skills through self-learning material covering the pathophysiology and clinical manifestations of dementia. Subsequently, participants were randomly assigned to one of two learning systems: AR training or conventional nursing mannequin training. Each group received a one-hour training intervention on changing the clothes of mannequins. The AR group's mannequin had a superimposed computer graphic of an elderly woman's face that reacted to participants' communication; furthermore, the communication skills of gaze and voice were evaluated by artificial intelligence (AI) and fed back to the participants' head-mounted display in real time. The conventional training group did self-training with nursing mannequins. All participants performed basic nursing care, including changing clothes and bed bathing, on simulated patients before and after the training intervention; the sessions were video-recorded by an eye-tracking camera and a fixed camera, and the communication skills in the videos were analyzed by AI. Additionally, participants' empathy toward patients was evaluated by the Jefferson Scale of Empathy-Health Professions Students Version (JSPE-HSP). The primary outcome was the proportion of time spent in eye contact during care of simulated patients. The secondary outcome was the JSPE-HSP empathy score.

    Results

    After the training intervention, the proportion of time spent in eye contact increased significantly more in the AR training group than in the conventional training group (eye contact 13.6% versus 4.4%, P<0.05). Moreover, the JSPE-HSP score increased from pre-training to post-training in the AR training group, whereas it decreased in the conventional training group [Mean (SD): 9.1 (6.6) versus -1.3 (3.8), P<0.01].

    Conclusions

    The simulated communication skills training for nursing students using augmented reality with real‐time feedback was associated with increased interactive communication skills to simulated patients and the empathy to patients.

    DOI: 10.1002/alz.062055

    researchmap

  • Ethical Considerations in User Modeling and Personalization (ECUMAP): ACM UMAP 2022 Tutorial.

    Jim Torresen, Atsushi Nakazawa

    UMAP   351 - 353   2022


    Publishing type:Research paper (international conference proceedings)   Publisher:ACM  

    DOI: 10.1145/3503252.3533721

    researchmap

    Other Link: https://dblp.uni-trier.de/db/conf/um/umap2022.html#TorresenN22

  • Editorial: Interaction in robot-assistive elderly care. International journal

    Hidenobu Sumioka, Jim Torresen, Masahiro Shiomi, Liang-Kung Chen, Atsushi Nakazawa

    Frontiers in robotics and AI   9   1020103 - 1020103   2022


    Language:English   Publishing type:Research paper (scientific journal)  

    DOI: 10.3389/frobt.2022.1020103

    PubMed

    researchmap

  • Non-goal-driven eye movement after visual search task

    Ayumi Takemoto, Atsushi Nakazawa, Takatsune Kumada

    Journal of Eye Movement Research   15 ( 2 )   2022


    Publishing type:Research paper (scientific journal)  

    We investigated the functions and mechanisms of non-goal-driven eye movements, which are defined as eye movements induced when looking at visual stimuli on a display without engaging in a specific task or looking at a display without any visual stimuli or tasks. In our experiment, participants were asked to perform a visual search task on a display, which was followed by a rest period in which stimuli remained on the display or all stimuli were erased. During the rest period, the participants were asked to only look at the displays without engaging in any visual or cognitive tasks. We mainly analyzed the gaze-shift patterns in both task and rest periods, in which eye movements were classified in accordance with the angles of saccade directions in two consecutive saccades. The results indicate a significant difference between goal-driven eye movements, which were observed in the task period, and nongoal-driven eye movements, which were observed in the rest period. Scanning gaze-shift patterns dominated the task period, and backward and corrective-saccade-like gaze-shift patterns dominated the rest period. The gaze-shift pattern was affected by the task-difficulty during the task period. From these findings, we propose a model describing the oculomotor system in terms of goal-driven and non-goal-driven eye movements. In this model, the engagement levels of top-down and bottom-up control change along with task difficulty and are affected by the gaze-shift patterns during a visual search task. Decoupling of top-down control from the oculomotor system during a rest period induces backward saccades, resulting in fixation around the central part of a display. Therefore, we suggest that non-goaldriven eye movements play a crucial role in maintaining the readiness of the oculomotor system for the next search task.

    DOI: 10.16910/jemr.15.2.2

    Scopus

    researchmap

  • Drowsiness prevention using a social robot.

    Koki Hara, Ayumi Takemoto, Atsushi Nakazawa

    RO-MAN   603 - 609   2022


    Publishing type:Research paper (international conference proceedings)   Publisher:IEEE  

    DOI: 10.1109/RO-MAN53752.2022.9900706

    researchmap

    Other Link: https://dblp.uni-trier.de/db/conf/ro-man/ro-man2022.html#HaraTN22

  • Facial expression translations preserving speaking content.

    Shiki Takeuchi, Atsushi Nakazawa

    26th International Conference on Pattern Recognition(ICPR)   1215 - 1221   2022


    Publishing type:Research paper (international conference proceedings)   Publisher:IEEE  

    DOI: 10.1109/ICPR56361.2022.9956508

    researchmap

    Other Link: https://dblp.uni-trier.de/db/conf/icpr/icpr2022.html#TakeuchiN22

  • Image Emotion Recognition Using Visual and Semantic Features Reflecting Emotional and Similar Objects

    Takahisa YAMAMOTO, Shiki TAKEUCHI, Atsushi NAKAZAWA

    IEICE Transactions on Information and Systems   E104.D ( 10 )   1691 - 1701   2021.10


    Publishing type:Research paper (scientific journal)   Publisher:Institute of Electronics, Information and Communications Engineers (IEICE)  

    DOI: 10.1587/transinf.2020edp7218

    researchmap

  • Technical Challenges for Smooth Interaction With Seniors With Dementia: Lessons From Humanitude™ International journal

    Hidenobu Sumioka, Masahiro Shiomi, Miwako Honda, Atsushi Nakazawa

    Frontiers in Robotics and AI   8   650906 - 650906   2021.6


    Language:English   Publishing type:Research paper (scientific journal)  

    Due to cognitive and socio-emotional decline and mental diseases, senior citizens, especially people with dementia (PwD), struggle to interact smoothly with their caregivers. Therefore, various care techniques have been proposed to develop good relationships with seniors. Among them, Humanitude is one promising technique that provides caregivers with useful interaction skills to improve their relationships with PwD, from four perspectives: face-to-face interaction, verbal communication, touch interaction, and helping care receivers stand up (physical interaction). Regardless of advances in elderly care techniques, since current social robots interact with seniors in the same manner as they do with younger adults, they lack several important functions. For example, Humanitude emphasizes the importance of interaction at a relatively intimate distance to facilitate communication with seniors. Unfortunately, few studies have developed an interaction model for clinical care communication. In this paper, we discuss the current challenges to develop a social robot that can smoothly interact with PwDs and overview the interaction skills used in Humanitude as well as the existing technologies.

    DOI: 10.3389/frobt.2021.650906

    Scopus

    PubMed

    researchmap

  • Evaluating imitation and rule-based behaviors of eye contact and blinking using an android for conversation

    Akishige Yuguchi, Tetsuya Sano, Gustavo Alfonso Garcia Ricardez, Jun Takamatsu, Atsushi Nakazawa, Tsukasa Ogasawara

    ADVANCED ROBOTICS   35 ( 15 )   907 - 918   2021.5


    Language:English   Publishing type:Research paper (scientific journal)   Publisher:TAYLOR & FRANCIS LTD  

    In this paper, we investigate which approach to generate eye behaviors using an android robot makes what impressions on humans and clarify which are the important factors for attractive eye behaviors. Thus, we evaluate the human impression of eye behaviors displayed by an android robot while talking to a human by comparing the motions generated by the two approaches. In the first approach, we develop a method to imitate human eye behavior obtained from eye trackers. In the second approach, we control the eye direction, eye-contact duration, and eyeblinks based on the findings of human eye behavior in psychology and cognitive research. In the experiments, we asked male and female subjects to evaluate their impression by comparing the eye behaviors with an android that controls the eye-contact duration and eyeblinks by editing the imitation parameters or the rule-based behavior. In the experiments, we asked subjects to evaluate their impression of different eye behaviors displayed by an android. The eye behaviors were generated by modifying the imitation parameters or the rule-based behavior, which resulted in different eye-contact duration and eyeblink duration and timing. From the results, we found that (1) the imitation and rule-based behaviors showed no difference in terms of human-likeness, (2) the 3-second eye contact obtained better scores regardless of the imitation or rule-based eye behavior, (3) the subjects might regard the long eyeblinks as voluntary eyeblinks, with the intention to break eye contact, and (4) female subjects preferred short eyeblinks rather than long ones and considered that short eyeblinks might be one of the keys to making eye contact more suitable, in contrast to male subjects who preferred long eyeblinks.

    DOI: 10.1080/01691864.2021.1928544

    Web of Science

    Scopus

    researchmap

  • The Influence of the Other’s Gaze Direction, Facial Expressions and the Participant’s Posture on the Interpersonal Cognition on a Nursing Care Bed-Examination Using a Head-mounted Display-

    Masato Nunoi, Sakiko Yoshikawa, Atsushi Nakazawa

    IEICE Transactions A (Web)   J104-A ( 2 )   2021

  • Visual Place Recognition from Eye Reflection

    Yuki Ohshima, Kyosuke Maeda, Yusuke Edamoto, Atsushi Nakazawa

    IEEE Access   9   57364 - 57371   2021


    Publishing type:Research paper (scientific journal)  

    The cornea in the human eye reflects incoming environmental light, which means we can obtain information about the surrounding environment from the corneal reflection in facial images. In recent years, as the quality of consumer cameras increases, this has caused privacy concerns in terms of identifying the people around the subject or where the photo is taken. This paper investigates the security risk of eye corneal reflection images: specifically, visual place recognition from eye reflection images. First, we constructed two datasets containing pairs of scene and corneal reflection images. The first dataset is taken in a virtual environment. We showed pre-captured scene images in a 180-degree surrounding display system and took corneal reflections from subjects. The second dataset is taken in an outdoor environment. We developed several visual place recognition algorithms, including CNN-based image descriptors featuring a naive Siamese network and AFD-Net combined with entire image feature representations including VLAD and NetVLAD, and compared the results. We found that AFD-Net+VLAD performed the best and was able to accurately determine the scene in 73.08% of the top-five candidate scenes. These results demonstrate the potential to estimate the location at which a facial picture was taken, which simultaneously leads to a) positive applications such as the localization of a robot while conversing with persons and b) negative scenarios including the security risk of uploading facial images to the public.

    DOI: 10.1109/ACCESS.2021.3071406

    Scopus

    researchmap

  • GAN-based Style Transformation to Improve Gesture-recognition Accuracy

    Noeru Suzuki, Yuki Watanabe, Atsushi Nakazawa

    Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies   4 ( 4 )   154 - 20   2020.12


    Publishing type:Research paper (scientific journal)  

    Gesture recognition and human-activity recognition from multi-channel sensory data are important tasks in wearable and ubiquitous computing. In these tasks, increasing both the number of recognizable activity classes and the recognition accuracy is essential. However, this is usually an ill-posed problem because individual differences in the same gesture class may affect the discrimination of different gesture classes. One promising solution is to use personal classifiers, but this requires personal gesture samples for re-training the classifiers. We propose a method of solving this issue that obtains personal gesture classifiers using few user gesture samples, thus achieving accurate gesture recognition for an increased number of gesture classes without requiring extensive user calibration. The novelty of our method is introducing a generative adversarial network (GAN)-based style transformer to 'generate' a user's gesture data. The method synthesizes the gesture examples of the target class of a target user by transforming a) gesture data of another class of the same user (intra-user transformation) or b) gesture data of the same class of another user (inter-user transformation). The synthesized data are then used to train the personal gesture classifier. We conducted comprehensive experiments using 1) different classifiers including SVM and CNN, 2) intra- and inter-user transformations, 3) various data-missing patterns, and 4) two different types of sensory data. Results showed that the proposed method had increased performance. Specifically, the CNN-based classifiers increased in average accuracy from 0.747 to 0.822 in the CheekInput dataset and from 0.856 to 0.899 in the USC-HAD dataset. Moreover, the experimental results with various data-missing conditions revealed a relation between the number of missing gesture classes and the accuracy of the existing and proposed methods, and we were able to clarify several advantages of the proposed method. These results indicate the potential of considerably reducing the number of required training samples from target users.

    DOI: 10.1145/3432199

    Scopus

    researchmap

  • A Generative Self-Ensemble Approach to Simulated+Unsupervised Learning

    Yu Mitsuzumi, Go Irie, Akisato Kimura, Atsushi Nakazawa

    Proceedings - International Conference on Image Processing, ICIP   2020-October   2151 - 2155   2020.10


    Publishing type:Research paper (international conference proceedings)   Publisher:IEEE  

    In this paper, we consider Simulated and Unsupervised (S+U) learning which is a problem of learning from labeled synthetic and unlabeled real images. After translating the synthetic images to real ones, existing S+U learning methods use only the labeled synthetic images for training a predictor (e.g., a regression function) and ignore the target real images, which may result in unsatisfactory prediction performance. Our approach utilizes both synthetic and real images to train the predictor. The main idea of ours is to involve a self-ensemble learning framework into S+U learning. More specifically, we require the prediction results for an unlabeled real image to be consistent between 'teacher' and 'student' predictors, even after some perturbations are added to the image. Furthermore, aiming at generating diverse perturbations along the underlying data manifold, we introduce one-to-many image translation between synthetic and real images. Evaluation experiments on an appearance-based gaze estimation task demonstrate that the proposed ideas can improve the prediction accuracy and our full method can outperform existing S+U learning methods.

    DOI: 10.1109/ICIP40778.2020.9191100

    Scopus

    researchmap

    Other Link: https://dblp.uni-trier.de/db/conf/icip/icip2020.html#MitsuzumiIKN20

  • First-person Video Analysis for Evaluating Skill Level in the Humanitude Tender-Care Technique

    Atsushi Nakazawa, Yu Mitsuzumi, Yuki Watanabe, Ryo Kurazume, Sakiko Yoshikawa, Miwako Honda

    Journal of Intelligent and Robotic Systems: Theory and Applications   98 ( 1 )   103 - 118   2020.4


    Publishing type:Research paper (scientific journal)  

    In this paper, we describe a wearable first-person video (FPV) analysis system for evaluating the skill levels of caregivers. This is a part of our project that aims to quantize and analyze the tender-care technique known as Humanitude by using wearable sensing and AI technology devices. Using our system, caregivers can evaluate and elevate their care levels by themselves. From the FPVs of care sessions taken by wearable cameras worn by caregivers, we obtained the 3D facial distance, pose and eye-contact states between caregivers and receivers by using facial landmark detection and deep neural network (DNN)-based eye contact detection. We applied statistical analysis to these features and developed algorithms that provide scores for tender-care skill. In experiments, we first evaluated the performance of our DNN-based eye contact detection by using eye contact datasets prepared from YouTube videos and FPVs that assume conversational scenes. We then performed skill evaluations by using Humanitude training scenes involving three novice caregivers, two Humanitude experts and seven middle-level students. The results showed that our eye contact detection outperformed existing methods and that our skill evaluations can estimate the care skill levels.

    DOI: 10.1007/s10846-019-01052-8

    Scopus

    researchmap

  • Real-time surgical needle detection using region-based convolutional neural networks

    Atsushi Nakazawa, Kanako Harada, Mamoru Mitsuishi, Pierre Jannin

    International Journal of Computer Assisted Radiology and Surgery   15 ( 1 )   41 - 47   2020.1


    Publishing type:Research paper (scientific journal)  

    Objective: Conventional surgical assistance and skill analysis for suturing mostly focus on the motions of the tools. As the quality of the suturing is determined by needle motions relative to the tissues, having knowledge of the needle motion would be useful for surgical assistance and skill analysis. As the first step toward demonstrating the usefulness of the knowledge of the needle motion, we developed a needle detection algorithm. Methods: Owing to the small needle size, attaching sensors to it is difficult. Therefore, we developed a real-time video-based needle detection algorithm using a region-based convolutional neural network. Results: Our method successfully detected the needle with an average precision of 89.2%. The needle was robustly detected even when the needle was heavily occluded by the tools and/or the blood vessels during microvascular anastomosis. However, there were some incorrect detections, including partial detection. Conclusion: To the best of our knowledge, this is the first time deep neural networks have been applied to real-time needle detection. In the future, we will develop a needle pose estimation algorithm using the predicted needle location toward computer-aided surgical assistance and surgical skill analysis.

    DOI: 10.1007/s11548-019-02050-9

    Scopus

    PubMed

    researchmap

    Other Link: https://dblp.uni-trier.de/db/journals/cars/cars15.html#NakazawaHMJ20

  • Eye Contact Detection from Third Person Video

    Yuki Ohshima, Atsushi Nakazawa

    Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)   12047 LNCS   667 - 677   2020


    Publishing type:Research paper (international conference proceedings)   Publisher:Springer  

    Eye contact is fundamental for human communication and social interactions; therefore, much effort has been made to develop automated eye-contact detection using image recognition techniques. However, existing methods use first-person videos (FPV), which require participants to wear cameras. In this work, we develop a novel eye contact detection algorithm for videos taken from a normal viewpoint (third-person video), assuming scenes of conversations or social interactions. Our system is highly affordable since it does not require special hardware or recording setups; moreover, it can use pre-recorded videos such as YouTube and home videos. In designing the algorithm, we first develop DNN-based one-sided gaze estimation algorithms that output whether one subject looks at another. Eye contact is then found at the frame where a pair of one-sided gazes occurs. To verify the proposed algorithm, we generated a third-person eye contact video dataset using publicly available videos from YouTube. As a result, the proposed algorithm achieved 0.775 in precision and 0.671 in recall, while the existing method achieved 0.484 in precision and 0.061 in recall.

    DOI: 10.1007/978-3-030-41299-9_52

    Scopus

    researchmap

    Other Link: https://dblp.uni-trier.de/db/conf/acpr/acpr2019-2.html#OhshimaN19

  • Imitation of Human Eyeblinks and Nodding Using an Android Toward Attentive Listening

    YUGUCHI Akishige, TAKAMATSU Jun, NAKAZAWA Atsushi, OGASAWARA Tsukasa

    The Proceedings of JSME annual Conference on Robotics and Mechatronics (Robomec)   2020   1P2-E16   2020


    Language:Japanese   Publisher:The Japan Society of Mechanical Engineers  

    Toward achieving natural attentive listening using an android, nodding in synchrony with a human talker is an effective nonverbal behavior, according to the findings of several works. In addition, according to related work, a human listener's eyeblinks synchronize with an android talker's in face-to-face communication. Here, we hypothesize that if an android in the role of a listener simultaneously imitates a human's eyeblinks and nodding in face-to-face communication, the android can exhibit more attentive listening behavior. In this paper, we propose a method to imitate human eyeblinks and nodding using an android in synchrony with the human's. Through the experiment, we confirmed that the android can generate imitation behaviors of human eyeblinks and nodding.

    DOI: 10.1299/jsmermd.2020.1p2-e16

    CiNii Article

    researchmap

  • An examination of gaze during conversation for designing culture-based robot behavior

    Louisa Hardjasa, Atsushi Nakazawa

    Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)   12194 LNCS   475 - 488   2020


    Publishing type:Research paper (international conference proceedings)   Publisher:Springer  

    Gaze behavior, including eye contact and gaze direction, is an essential component of non-verbal communication, helping to facilitate human-to-human conversation in ways that have often been thought of as universal and innate. However, these have been shown to be influenced partially by cultural norms and background, and despite this, the majority of social robots do not have any cultural-based non-verbal behaviors and several lack any directional gaze capabilities at all. This study aims to observe how different gaze behaviors manifest during conversation as a function of culture as well as exposure to other cultures, by examining differences in behaviors such as duration of direct gaze, duration and direction of averted gaze, and average number of shifts in gaze, with the objective of establishing a baseline of Japanese gaze behavior to be implemented into a social robot. Japanese subjects were found to have much more averted gaze during a task that involves thinking as opposed to a task focused on communication. Subjects with significant experience living overseas were found to have different directional gaze patterns from subjects with little to no overseas experience, implying that non-verbal behavior patterns can change with exposure to other cultures.

    DOI: 10.1007/978-3-030-49570-1_33

    Scopus

    researchmap

    Other Link: https://dblp.uni-trier.de/db/conf/hci/hci2020-14.html#HardjasaN20

  • Comparison of Gaze Skills Between Expert and Novice in Elderly Care

    Miyuki Iwamoto, Atsushi Nakazawa

    Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)   12208 LNCS   91 - 100   2020

     More details

    Publishing type:Research paper (international conference proceedings)   Publisher:Springer  

    It is known that being ignored by others causes various negative reactions and promotes aggressive and self-destructive behavior [1–3]. The same holds for elderly people with dementia; depending on how care is delivered, it may cause them fear and confusion [4]. At the same time, caring for elderly people with dementia places a great mental and physical burden on caregivers. As a result, the turnover rate has increased and it has become difficult to provide adequate care [5–8]. In response to these problems, Humanitude is gaining attention as an approach to dementia care [4, 10]. Humanitude consists of four skills: "see", "touch", "speak", and "stand" [9–11]. Our research focuses on "see", one of these basic skills. A person's gaze is generally directed at an object of interest or attention, and is extremely useful information for estimating another person's state of mind [3, 12, 13]. We therefore had caregivers wear a first-person camera and measured four types of "seeing" behavior patterns during oral care (care receiver → caregiver, care receiver ← caregiver, mutual gaze, and none), comparing Humanitude experts with novices. There was a large difference between experts and novices in the frequency and duration of mutual gaze with the care receiver. Meeting another person's eyes is an act of not ignoring them and indicates interest in the care receiver, and is considered to reduce the care receiver's anxiety and fear during care.
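
    The four "seeing" states above can be derived frame-by-frame from two one-way gaze flags. A minimal sketch; the frame format, state labels, and frame rate are illustrative, not taken from the paper:

```python
def gaze_state(caregiver_looks: bool, receiver_looks: bool) -> str:
    """Map the two one-way gaze flags of a frame to one of the four dyadic states."""
    if caregiver_looks and receiver_looks:
        return "mutual"
    if caregiver_looks:
        return "caregiver->receiver"
    if receiver_looks:
        return "receiver->caregiver"
    return "none"

def summarize(frames, fps=30.0):
    """Aggregate per-frame (caregiver, receiver) flags into seconds per state."""
    totals = {}
    for cg, rc in frames:
        s = gaze_state(cg, rc)
        totals[s] = totals.get(s, 0.0) + 1.0 / fps
    return totals
```

    Comparing the resulting per-state durations between two recordings is then a matter of comparing the two dictionaries.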

    DOI: 10.1007/978-3-030-50249-2_7

    Scopus

    researchmap

    Other Link: https://dblp.uni-trier.de/db/conf/hci/hci2020-28.html#IwamotoN20

  • Evaluating Imitation of Human Eye Contact and Blinking Behavior Using an Android for Human-like Communication

    Tetsuya Sano, Akishige Yuguchi, Gustavo Alfonso Garcia Ricardez, Jun Takamatsu, Atsushi Nakazawa, Tsukasa Ogasawara

    2019 28th IEEE International Conference on Robot and Human Interactive Communication, RO-MAN 2019   1 - 6   2019.10

     More details

    Publishing type:Research paper (international conference proceedings)   Publisher:IEEE  

    The appearance of android robots is very similar to that of human beings, so we expect that androids might provide high-level communication. Imitating human behavior gives the impression of naturalness even if we do not know what drives high-level communication. In this paper, we evaluate an android's imitation of human eye behavior. We consider a setting in which the android imitates human eye behavior while explaining a research topic and a person acts as the listener. We construct a method to imitate the eye behavior captured by eye trackers. For the evaluation, we collected subjective ratings from seventeen male subjects, comparing the imitation against an android whose eye-contact duration and eyeblinks were controlled either by editing the imitation or by programming rule-based behavior. From the results, we found that 1) the rule-based behaviors preserved human-likeness, 2) 3-second eye contact obtained better scores regardless of whether the eye behavior was imitation-based or rule-based, and 3) the subjects might regard longer eyeblinks as voluntary eyeblinks intended to break eye contact.

    DOI: 10.1109/RO-MAN46459.2019.8956387

    Scopus

    researchmap

    Other Link: https://dblp.uni-trier.de/db/conf/ro-man/ro-man2019.html#SanoYRTNO19

  • First-person camera system to evaluate tender dementia-care skill

    Atsushi Nakazawa, Miwako Honda

    Proceedings - 2019 International Conference on Computer Vision Workshop, ICCVW 2019   4435 - 4442   2019.10

     More details

    Publishing type:Research paper (international conference proceedings)   Publisher:IEEE  

    In this paper, we describe a wearable first-person video (FPV) analysis system for evaluating skill levels in tender dementia-care techniques. Using this system, caregivers can evaluate and improve their care skills by themselves using the system's feedback. From FPVs of care sessions captured by wearable cameras worn by caregivers, we obtained the 3D facial distance, pose, and eye-contact states between caregivers and receivers using facial landmark detection and deep neural network (DNN)-based eye contact detection. We applied statistical analysis to these features and developed algorithms that score tender-care skill. To test and confirm this idea, we conducted a chronological study observing the progression of tender care-skill learning. First, we recorded FPVs of care training scenes involving novice caregivers, tender-care experts, and middle-level students, and found major behavioural differences among them. Second, we performed the same experiments on participants before and after care training sessions. As a result, we found the same behavioural differences between 1) novices and experts and 2) novices before and after the training sessions. These results indicate that our FPV-based behavior analysis can evaluate skill progression in tender dementia care.

    DOI: 10.1109/ICCVW.2019.00544

    Scopus

    researchmap

    Other Link: https://dblp.uni-trier.de/db/conf/iccvw/iccvw2019.html#NakazawaH19

  • HMD-based cover test system for the diagnosis of ocular misalignment

    Noriyuki Uchida, Kayoko Takatuka, Hisaaki Yamaba, Atsushi Nakazawa, Masayuki Mukunoki, Naonobu Okazaki

    Artificial Life and Robotics   24 ( 3 )   332 - 337   2019.9

     More details

    Publishing type:Research paper (scientific journal)  

    The diagnosis of ocular misalignment is difficult and requires examination by ophthalmologists and orthoptists; however, there are not enough qualified personnel to perform such diagnoses. The eye position check has been partly systematized: with the existing check system, we can detect not only the symptoms but also the angle and extent of strabismus. However, the types of strabismus detectable with this technique are limited to exotropia. The purpose of this study is to develop a simplified check system that screens at least for the presence of strabismus, regardless of its type or the amount of ocular deviation. First, we digitalized the check process; specifically, we digitized the elemental technology, i.e., the cover–uncover function, required for automating the typical cover test for eye position. Furthermore, we developed and implemented an abnormality determination process and evaluated the performance of the system experimentally; the results indicated a higher detection capability than the conventional cover test performed by ophthalmologists and orthoptists.

    DOI: 10.1007/s10015-018-0520-4

    Scopus

    researchmap

  • Towards overhead semantic part segmentation of workers in factory environments from depth images using a FCN

    Masakazu Yoshimura, Murilo M. Marinho, Atsushi Nakazawa, Kanako Harada, Mamoru Mitsuishi, Takuya Maekawa, Yasuo Namioka

    2019 IEEE International Conference on Cyborg and Bionic Systems, CBS 2019   204 - 209   2019.9

     More details

    Publishing type:Research paper (international conference proceedings)   Publisher:IEEE  

    In production lines, human workers assemble and/or inspect products in a predetermined process flow. Training new workers is a complex process with varying results. To track workers' motion in a way that better reveals human skill, overhead semantic part segmentation of workers is desired. For this purpose, we propose a fully convolutional neural network model paired with four proposed augmentation strategies. Artificial depth images were used as training data, and the augmentation strategies were essential in helping the network generalize to real images. The proposed method was evaluated on two tasks with different backgrounds: a part-assembly task and a quality-check task. Compared to our previous work, we improved the F-measure by 12% on the part-assembly task and 4% on the quality-check task.

    DOI: 10.1109/CBS46900.2019.9114417

    Scopus

    researchmap

    Other Link: https://dblp.uni-trier.de/db/conf/cbs/cbs2019.html#YoshimuraMNHMMN19

  • Robust Pupil Segmentation and Center Detection from Visible Light Images Using Convolutional Neural Network Reviewed

    Kazunari Kitazumi, Atsushi Nakazawa

    Proceedings - 2018 IEEE International Conference on Systems, Man, and Cybernetics, SMC 2018   862 - 868   2019.1

     More details

    Language:English   Publishing type:Research paper (international conference proceedings)   Publisher:IEEE  

    In this paper, we present a robust pupil detection method for visible light (VL) images using a convolutional neural network (CNN). In contrast to existing pupil detection algorithms, our method does not require infrared (IR) illumination or cameras, and works robustly even on images of dark-brown irises (black eyes) and/or images containing strong corneal reflections, such as outdoor scenes, which widens its range of application. Our method first detects an eye region in the input image and then applies CNN-based pupil segmentation with a composition-and-decomposition structure. To learn the relationship between visible eye images and pupil segments, we constructed two datasets and augmentation algorithms. One dataset is based on an existing eye image dataset (UBIRIS.v2); the other consists of images we captured ourselves using a corneal imaging camera that can take eye images in mobile environments. By applying color and corneal-reflection augmentations to these datasets and using them to train the CNN, we built a robust pupil segmentation network. The performance is evaluated in three ways. First, we evaluate segmentation accuracy. Second, we evaluate pupil center detection accuracy using the GI4E facial image set. Third, we developed an eye gaze tracking (EGT) algorithm that uses the pupil detection and evaluated its accuracy. The results show that the proposed method detects pupil centers more accurately than the state of the art, and achieves EGT accuracy similar to commercial systems that use active IR lighting setups.
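
    As a rough illustration of the corneal-reflection augmentation idea, one can overlay a synthetic specular highlight on a normalized grayscale eye image. The Gaussian shape and parameter values here are assumptions for the sketch, not the paper's actual augmentation:

```python
import numpy as np

def add_corneal_reflection(img, center, radius, strength=1.0):
    """Overlay a Gaussian-shaped specular highlight (a synthetic corneal
    reflection) onto a grayscale image with values in [0, 1]."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    d2 = (ys - center[0]) ** 2 + (xs - center[1]) ** 2
    highlight = strength * np.exp(-d2 / (2.0 * radius ** 2))
    return np.clip(img + highlight, 0.0, 1.0)
```

    Randomizing `center`, `radius`, and `strength` per training sample would give the network examples of eyes corrupted by bright reflections.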

    DOI: 10.1109/SMC.2018.00154

    Scopus

    researchmap

    Other Link: https://dblp.uni-trier.de/db/conf/smc/smc2018.html#KitazumiN18

  • Eye Contact Detection Algorithms Using Deep Learning and Generative Adversarial Networks Reviewed

    Yu Mitsuzumi, Atsushi Nakazawa

    Proceedings - 2018 IEEE International Conference on Systems, Man, and Cybernetics, SMC 2018   3927 - 3931   2019.1

     More details

    Publishing type:Research paper (international conference proceedings)   Publisher:IEEE  

    Eye contact (mutual gaze) is a foundation of human communication and social interaction; therefore, it is studied in many fields such as psychology, social science, and medicine. Our group has been studying wearable vision-based eye contact detection techniques using a first-person camera for the purpose of evaluating gaze skills in tender dementia care. In this work, we explore deep learning-based eye contact detection techniques that learn from a small number of labeled images. We implemented and tested two eye contact detection algorithms: a naïve deep learning-based algorithm and a generative adversarial network (GAN)-based semi-supervised learning (SSL) algorithm. These methods were trained and verified using the Columbia Gaze Dataset, FaceScrub, and our original datasets. The results show the effectiveness and limitations of the deep learning-based and GAN-based approaches. Interestingly, we found a left-right asymmetry in eye contact detection accuracy depending on the facial pose with respect to the camera, which we expect is caused by the training datasets.

    DOI: 10.1109/SMC.2018.00666

    Scopus

    researchmap

    Other Link: https://dblp.uni-trier.de/db/conf/smc/smc2018.html#MitsuzumiN18

  • Spatio-temporal eye contact detection combining CNN and LSTM

    Yuki Watanabe, Atsushi Nakazawa, Yu Mitsuzumi, Toyoaki Nishida

    PROCEEDINGS OF MVA 2019 16TH INTERNATIONAL CONFERENCE ON MACHINE VISION APPLICATIONS (MVA)   1 - 7   2019

     More details

    Language:English   Publishing type:Research paper (international conference proceedings)   Publisher:IEEE  

    Eye contact (mutual gaze) is fundamental for human communication and social interactions; therefore, it is studied in many fields. To support the study of eye contact, much effort has been made to develop automated eye-contact detection using image recognition techniques. In recent years, convolutional neural network (CNN)-based eye-contact detection techniques have become popular due to their performance; however, they mainly use a single frame for recognition. Eye contact is a human communication behavior, so temporal information, such as sequences of eye images and facial poses, is important for increasing the accuracy of eye-contact detection. We incorporate temporal information into eye-contact detection using temporal network structures that combine CNNs and long short-term memory (LSTM). We tested several combinations of CNNs and LSTM and found the best configuration, which feeds both the CNN outputs and the LSTM cell state vectors into the fully connected layers. We prepared two eye contact video datasets: one based on online videos, and the other recorded with a first-person camera in assumed conversational scenarios. The results show that our method outperforms approaches that use single frames: it achieves an F1-score of 0.8781, versus 0.8319 for the existing method (DeepEC).
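
    The fusion described above, feeding the per-frame CNN features together with the LSTM cell state into the fully connected layers, can be sketched with a single hand-written LSTM step. Dimensions, gate ordering, and weight shapes are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM time step; the four gates are stacked as [i, f, o, g]."""
    z = W @ x + U @ h + b
    n = h.size
    i, f, o = sigmoid(z[:n]), sigmoid(z[n:2*n]), sigmoid(z[2*n:3*n])
    g = np.tanh(z[3*n:])
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

def fused_features(cnn_feat, h, c):
    """Concatenate the per-frame CNN feature vector with BOTH the LSTM
    hidden state and cell state, mirroring the best combination above."""
    return np.concatenate([cnn_feat, h, c])
```

    A fully connected classifier would then operate on `fused_features(...)` rather than on the hidden state alone.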

    DOI: 10.23919/MVA.2019.8757989

    Web of Science

    Scopus

    researchmap

    Other Link: https://dblp.uni-trier.de/db/conf/mva/mva2019.html#WatanabeNMN19

  • General Improvement Method of Specular Component Separation Using High-Emphasis Filter and Similarity Function

    Takahisa Yamamoto, Atsushi Nakazawa

    ITE TRANSACTIONS ON MEDIA TECHNOLOGY AND APPLICATIONS   7 ( 2 )   92 - 102   2019

     More details

    Language:English   Publishing type:Research paper (scientific journal)   Publisher:INST IMAGE INFORMATION & TELEVISION ENGINEERS  

    Separating reflection components is a fundamental problem in computer vision and is useful for many applications, such as image quality enhancement. We propose a novel method that improves the accuracy of separating reflection components from a single image. Although several algorithms for separating reflection components have been proposed, our method can further improve accuracy starting from their results. First, we obtain diffuse and specular components using an existing algorithm. Then, we apply a high-emphasis filter to each component. Since the filter responses become larger than the original values where separation fails, we can detect erroneous pixels. We then replace the separation results of these erroneous pixels with the results of other reference pixels from the image, considering the similarity between the target and reference pixels. Experimental results show that our method improves the Peak Signal-to-Noise Ratio (PSNR) by up to 13.61 dB.
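
    A toy version of the detect-and-replace step: use a Laplacian as the high-emphasis filter, flag pixels with large responses as erroneous, and replace them, here simply with the median of the trusted pixels as a crude stand-in for the paper's similarity-weighted replacement. The filter choice and threshold are illustrative:

```python
import numpy as np

def high_emphasis_response(img):
    """Magnitude of a 4-neighbour Laplacian, a simple high-emphasis filter."""
    pad = np.pad(img, 1, mode="edge")
    lap = (pad[:-2, 1:-1] + pad[2:, 1:-1] +
           pad[1:-1, :-2] + pad[1:-1, 2:] - 4.0 * img)
    return np.abs(lap)

def refine_separation(specular, threshold=0.5):
    """Flag pixels whose filter response exceeds `threshold` and replace
    them with the median of the remaining (trusted) pixels."""
    bad = high_emphasis_response(specular) > threshold
    out = specular.copy()
    if bad.any() and (~bad).any():
        out[bad] = np.median(specular[~bad])
    return out, bad
```

    In the actual method the replacement value comes from similar reference pixels rather than a global median, but the error-detection logic is the same shape.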

    DOI: 10.3169/mta.7.92

    Web of Science

    Scopus

    researchmap

  • FASHION STYLE RECOGNITION USING COMPONENT-DEPENDENT CONVOLUTIONAL NEURAL NETWORKS

    Takahisa Yamamoto, Atsushi Nakazawa

    2019 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP)   2019-September   3397 - 3401   2019

     More details

    Language:English   Publishing type:Research paper (international conference proceedings)   Publisher:IEEE  

    Fashion style recognition is important in online marketing applications. Several algorithms have been proposed, but their accuracy is still unsatisfactory. In this paper, we present an improved fashion style recognition method, component-dependent convolutional neural networks (CD-CNNs). Given that many fashion styles largely depend on the features of specific body parts or body postures, we first obtain images of the body parts and postures using semantic segmentation and pose estimation algorithms, and then pre-train the CD-CNNs. Classification is performed on the concatenated outputs of the CD-CNNs using a support vector machine (SVM). Experimental results on the HipsterWars and FashionStyle14 datasets show that our method is effective and improves classification accuracy to 85.3% on HipsterWars and 77.7% on FashionStyle14, compared with 80.9% and 72.0%, respectively, for existing methods.

    DOI: 10.1109/ICIP.2019.8803622

    Web of Science

    Scopus

    researchmap

    Other Link: https://dblp.uni-trier.de/db/conf/icip/icip2019.html#YamamotoN19

  • Computational and Neuroscientific Elucidation of Tender Care Interactions Reviewed

    中澤 篤志

    Human Interface Symposium   2018.9

     More details

    Language:Japanese  

    researchmap

  • Development of a skill evaluation system for the camera assistant using an infant-sized laparoscopic box trainer

    Tetsuya Ishimaru, Kyoichi Deie, Tomoya Sakai, Hideyuki Satoh, Atsushi Nakazawa, Kanako Harada, Shinya Takazawa, Jun Fujishiro, Naohiko Sugita, Mamoru Mitsuishi, Tadashi Iwanaka

    Journal of Laparoendoscopic and Advanced Surgical Techniques   28 ( 7 )   906 - 911   2018.7

     More details

    Publishing type:Research paper (scientific journal)  

    Aims: Our aims were to develop a training system for camera assistants (CA), and evaluate participants' performance as CA. Methods: A questionnaire on essential requirements to be a good CA was administered to experts in pediatric endoscopic surgery. An infant-sized box trainer with several markers and lines inside was developed. Participants performed marker capturing and line-tracing tasks using a 5-mm 30° scope. A postexperimental questionnaire on the developed system was administered. The task completion time was measured. Results: The 5-point evaluation scale was used for each item in the questionnaire survey of experts. The abilities to maintain a horizontal line (mean score: 4.5) and to center the target in a specified rectangle on the monitor (4.5) as well as having a full understanding of the operative procedure (4.3) were ranked as highly important. Fifty-two participants, including 5 surgical residents, were enrolled in the evaluation experiment. The completion time of capturing the markers was significantly longer in the resident group than in the nonresident group (244 versus 124 seconds, P = .04), but that of tracing the lines was not significantly different between the groups. The postexperimental questionnaire showed that the participants felt that the line-tracing tasks (3.7) were more difficult than marker-capturing tasks (2.9). Conclusions: Being proficient in manipulating a camera and having adequate knowledge of operative procedures are essential requirements to be a good CA. The ability was different between the resident and nonresident groups even in a simple task such as marker capturing.

    DOI: 10.1089/lap.2017.0406

    Scopus

    PubMed

    researchmap

  • Computational Tender-Care Science: Computational and Cognitive Neuro-scientific Approaches for Understanding the Tender Care Reviewed

    Atsushi Nakazawa (Kyoto University), Ryo Kurazume (Kyushu University), Miwako Honda (Tokyo Medical Center), Wataru Sato (Kyoto University), Shogo Ishikawa (Shizuoka University), Sakiko Yoshikawa

    Workshop on Elderly Care Robotics – Technology and Ethics   2018.5

     More details

    Language:English   Publishing type:Research paper (international conference proceedings)  

    researchmap

  • Computational Tender-Care Science: Computational and Cognitive Neuroscientific Approaches for Understanding the Tender Care Reviewed

    Nakazawa A, Kurazume R, Honda M, Sato W, Ishikawa S, Yoshikawa S, Ito M

    IUI Workshop on Symbiotic Interaction and Harmonious Collaboration for Wisdom Computing   1   1 - 9   2018.3

     More details

    Language:English   Publishing type:Research paper (international conference proceedings)  

    researchmap

  • Pupil Detection from Visible-Light Images Using Deep Learning and Its Application to Point-of-Gaze Estimation

    北角 一哲, 中澤 篤志, 西田 豊明

    IEICE Technical Report   117 ( 392 )   93 - 99   2018.1

     More details

    Language:Japanese   Publishing type:Research paper (conference, symposium, etc.)  

    researchmap

  • Towards robust needle segmentation and tracking in pediatric endoscopic surgery.

    Yujun Chen, Murilo M. Marinho, Yusuke Kurose, Atsushi Nakazawa, Kyoichi Deie, Kanako Harada, Mamoru Mitsuishi

    Medical Imaging 2018: Image-Guided Procedures, Robotic Interventions, and Modeling   105762   2018

     More details

    Publishing type:Research paper (international conference proceedings)   Publisher:SPIE  

    DOI: 10.1117/12.2292418

    researchmap

    Other Link: https://dblp.uni-trier.de/db/conf/miigp/miigp2018.html#ChenMKNDHM18

  • Point of Gaze Estimation Using Corneal Surface Reflection and Omnidirectional Camera Image. Reviewed

    Taishi Ogawa, Atsushi Nakazawa, Toyoaki Nishida

    IEICE Trans. Inf. Syst.   101-D ( 5 )   1278 - 1287   2018

     More details

    Language:English   Publishing type:Research paper (scientific journal)   Publisher:Institute of Electronics, Information and Communication, Engineers, IEICE  

    We present a human point-of-gaze (PoG) estimation system that uses corneal surface reflection and an omnidirectional image taken by a spherical panorama camera, a type of device that has become popular in recent years. Our system can find where a user is looking in a 360° surrounding scene image from an eye image alone, and thus does not need the mapping from partial scene images to a whole scene image required by conventional eye gaze tracking systems. We first generate multiple perspective scene images from an omnidirectional (equirectangular) image and register the corneal reflection against the perspective images using a corneal reflection-scene image registration technique. We then compute the point of gaze using a corneal imaging technique based on a 3D eye model, and project the point into the omnidirectional image. The 3D eye pose is estimated using a particle-filter-based tracking algorithm. In experiments, we evaluated the accuracy of 3D eye pose estimation, the robustness of registration, and the accuracy of PoG estimation using two indoor and five outdoor scenes, and found that the gaze mapping error was 5.546 deg on average.
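
    The first step above relates 3-D viewing directions to equirectangular pixel coordinates. A minimal sketch of that mapping; the axis conventions (+z forward, +y down) and image origin are assumptions, not the paper's:

```python
import numpy as np

def dir_to_equirect(d, width, height):
    """Map a 3-D viewing direction to (u, v) pixel coordinates in an
    equirectangular panorama: u from longitude, v from latitude."""
    x, y, z = d / np.linalg.norm(d)
    lon = np.arctan2(x, z)          # [-pi, pi], 0 = straight ahead
    lat = np.arcsin(y)              # [-pi/2, pi/2]
    u = (lon / (2.0 * np.pi) + 0.5) * width
    v = (lat / np.pi + 0.5) * height
    return u, v
```

    Generating a perspective view amounts to evaluating this mapping for the ray through each pixel of the virtual perspective camera and sampling the panorama there.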

    DOI: 10.1587/transinf.2017MVP0020

    Scopus

    researchmap

    Other Link: https://dblp.uni-trier.de/db/journals/ieicet/ieicet101d.html#OgawaNN18

  • Shooting-Location Retrieval for Self-Taken Images Using Deep Learning

    江川 佳輝, 小川 太士, 中澤 篤志

    IEICE Technical Report   117 ( 392 )   333 - 337   2017.12

     More details

    Language:Japanese   Publishing type:Research paper (conference, symposium, etc.)  

    researchmap

  • Open Ideas in PRMU Applied Research: The Second-Term PRMU Grand Challenge Invited

    中澤篤志, 山崎俊彦, 松下康之, 安倍 満, 舩冨卓哉, 木村昭悟, 内田誠一, 前田英作

    IEICE Technical Report   117 ( 362 )   41 - 43   2017.12

     More details

    Language:Japanese   Publishing type:Research paper (conference, symposium, etc.)  

    researchmap

  • DEEP eye contact detector: Robust eye contact bid detection using convolutional neural network. Reviewed

    Mitsuzumi Y, Nakazawa A, Nishida T

    British Machine Vision Conference (BMVC2017)   2017.9

     More details

    Language:English   Publishing type:Research paper (international conference proceedings)  

    researchmap

  • Noninvasive Corneal Image-Based Gaze Measurement System Reviewed International coauthorship

    Eunji Chong, Christian Nitschke, Atsushi Nakazawa, Agata Rozga, James M Rehg

    arXiv preprint arXiv:1708.00908   1   1 - 8   2017.8

     More details

    Language:English   Publishing type:Research paper (scientific journal)  

    researchmap

    Other Link: https://kaken.nii.ac.jp/grant/KAKENHI-PROJECT-17H01779/

  • Point of gaze estimation using corneal surface reflection and omnidirectional camera image Reviewed

    Taishi Ogawa, Atsushi Nakazawa, Toyoaki Nishida

    Proceedings of the 15th IAPR International Conference on Machine Vision Applications, MVA 2017   227 - 230   2017.7

     More details

    Language:English   Publishing type:Research paper (international conference proceedings)   Publisher:Institute of Electrical and Electronics Engineers Inc.  

    We present a human point-of-gaze (PoG) estimation system using corneal surface reflection and an omnidirectional image taken by a fisheye camera. By capturing only an eye image, our system can find where a user is looking in a 360° surrounding scene image. We first generate multiple perspective scene images from an equirectangular image and perform registration between the corneal reflection and perspective images. We then compute the point of gaze using a 3D eye model and project the point into the omnidirectional image. We evaluated the robustness of registration and the accuracy of PoG estimation using two indoor and five outdoor scenes, and found that the gaze mapping error was 5.526 deg on average. This result shows the approach's potential for marketing and outdoor training applications.

    DOI: 10.23919/MVA.2017.7986842

    Scopus

    researchmap

  • The Potential of Artificial Intelligence in Medical Communication: Toward an Era in Which Even Physicians' "Five Senses" Can Be Quantified Invited

    中澤 篤志

    総合診療   27 ( 5 )   621 - 623   2017.5

     More details

    Language:Japanese   Publisher:Igaku-Shoin Ltd.  

    DOI: 10.11477/mf.1429200917

    researchmap

    Other Link: https://kaken.nii.ac.jp/grant/KAKENHI-PROJECT-17H01779/

  • Eye gaze tracking using corneal imaging and active illumination devices. Reviewed

    Atsushi Nakazawa, Hiroaki Kato, Christian Nitschke, Toyoaki Nishida

    Adv. Robotics   31 ( 8 )   413 - 427   2017

     More details

    Language:English   Publishing type:Research paper (scientific journal)   Publisher:TAYLOR & FRANCIS LTD  

    This paper presents a novel eye gaze tracking (EGT) technique based on corneal imaging. In contrast to existing pupil center and corneal reflection techniques, our approach directly finds the PoG in the scene image reflected on the human corneal surface. As a result, it does not suffer from the parallax issue and does not require per-setup system calibration. To achieve this, we develop the following techniques. First, we use the idea of the gaze-reflection point (GRP), where light from the PoG in the scene reflects at the corneal surface into an eye image. Second, illuminating the whole scene or particular objects with coded structured light enables robust and accurate matching at the GRP to obtain the PoG in a scene image. For this purpose, we show two implementations: a special high-power IR LED-array projector and active LED markers. Experimental evaluation shows that the proposed scheme achieves considerable accuracy and successfully supports depth-varying environments as well as practical applications, including observation of a conversation scene. We believe the proposed EGT technique has considerable potential to solve major issues of current EGT systems, expand the application fields of EGT, and increase the usability of interactive systems.
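
    Locating the gaze-reflection point relies on the specular mirror law at the corneal surface. A minimal helper for that single operation (not the paper's full GRP search, which also requires the corneal geometry and eye pose):

```python
import numpy as np

def reflect(d, n):
    """Reflect an incoming direction d about the surface normal n
    (specular mirror law), the basic operation when tracing rays
    off the corneal surface."""
    n = n / np.linalg.norm(n)
    return d - 2.0 * np.dot(d, n) * n
```

    Searching the corneal surface for the point whose reflected ray passes through the camera center is what ties this law to the GRP.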

    DOI: 10.1080/01691864.2016.1277552

    Web of Science

    researchmap

    Other Link: https://dblp.uni-trier.de/db/journals/ar/ar31.html#NakazawaKNN17

  • Toward autonomous collision avoidance for robotic neurosurgery in deep and narrow spaces in the brain

    Hiroaki Ueda, Ryoya Suzuki, Atsushi Nakazawa, Yusuke Kurose, Murilo M. Marinho, Naoyuki Shono, Hirofumi Nakatomi, Nobuhito Saito, Eiju Watanabe, Akio Morita, Kanako Harada, Naohiko Sugita, Mamoru Mitsuishi

    3RD CIRP CONFERENCE ON BIOMANUFACTURING   65   110 - 114   2017

     More details

    Language:English   Publishing type:Research paper (international conference proceedings)   Publisher:ELSEVIER SCIENCE BV  

    The present authors have been developing a master-slave neurosurgical robot and its intelligent control for tasks in the deep and narrow spaces of the brain. This paper proposes a robotic autonomous control method for avoiding possible collisions between the shaft of a surgical robotic instrument and the surrounding tissues. To this end, a new robotic simulator was developed and used to evaluate the proposed method. The results showed the proof of concept of the proposed autonomous collision avoidance, which has the potential to enhance the safety of robotic neurosurgery in deep and narrow spaces. (C) 2016 The Authors. Published by Elsevier B.V.

    DOI: 10.1016/j.procir.2017.04.027

    Web of Science

    researchmap

  • Registration of eye reflection and scene images using an aspherical eye model Reviewed

    Atsushi Nakazawa, Christian Nitschke, Toyoaki Nishida

    JOURNAL OF THE OPTICAL SOCIETY OF AMERICA A-OPTICS IMAGE SCIENCE AND VISION   33 ( 11 )   2264 - 2276   2016.11

     More details

    Language:English   Publishing type:Research paper (scientific journal)   Publisher:OPTICAL SOC AMER  

    This paper introduces an image registration algorithm between an eye reflection and a scene image. Although there are currently a large number of image registration algorithms, this task remains difficult due to nonlinear distortions at the eye surface and large amounts of noise, such as iris texture, eyelids, eyelashes, and their shadows. To overcome this issue, we developed an image registration method combining an aspherical eye model that simulates nonlinear distortions considering eye geometry and a two-step iterative registration strategy that obtains dense correspondence of the feature points to achieve accurate image registrations for the entire image region. We obtained a database of eye reflection and scene images featuring four subjects in indoor and outdoor scenes and compared the registration performance with different asphericity conditions. Results showed that the proposed approach can perform accurate registration with an average accuracy of 1.05 deg by using the aspherical cornea model. This work is relevant for eye image analysis in general, enabling novel applications and scenarios. (C) 2016 Optical Society of America
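
    An aspherical cornea is commonly modeled with the conic-section sag equation; this sketch uses typical literature values for the apex radius and asphericity as defaults, not the parameters evaluated in the paper:

```python
import math

def corneal_sag(r, R=7.8, Q=-0.25):
    """Surface depth z at radial distance r (mm) from the corneal apex,
    via the conic sag equation z = c r^2 / (1 + sqrt(1 - (1+Q) c^2 r^2)),
    where c = 1/R. Q = 0 gives a sphere; Q < 0 a flattening cornea."""
    c = 1.0 / R
    return c * r * r / (1.0 + math.sqrt(1.0 - (1.0 + Q) * c * c * r * r))
```

    Varying `Q` away from zero is what lets such a model capture the nonlinear distortions that a purely spherical cornea cannot.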

    DOI: 10.1364/JOSAA.33.002264

    Web of Science

    researchmap

  • Feedback methods for collision avoidance using virtual fixtures for robotic neurosurgery in deep and narrow spaces

    Atsushi Nakazawa, Kodai Nanri, Kanako Harada, Shinichi Tanaka, Hiroshi Nukariya, Yusuke Kurose, Naoyuki Shono, Hirohumi Nakatomi, Akio Morita, Eiju Watanabe, Naohiko Sugita, Mamoru Mitsuishi

    Proceedings of the IEEE RAS and EMBS International Conference on Biomedical Robotics and Biomechatronics   2016-July   247 - 252   2016.7

     More details

    Publishing type:Research paper (international conference proceedings)  

    Robotic assistance enables a surgeon to perform dexterous and precise manipulations. However, conducting robot-assisted neurosurgery within the deep and narrow spaces of the brain presents the risk of unexpected collisions between the shafts of robotic instruments and surroundings outside the microscopic view. Thus, we propose providing feedback using a truncated-cone-shaped virtual fixture, generated by marking the edges of the top and bottom planes of a workspace in the deep and narrow spaces of the brain with the slave manipulator. The experimental results show that the virtual fixture generation method can model the workspace precisely. We also implemented force feedback, visual feedback, and motion-scaling feedback in the microsurgical robotic system to inform the surgeon of collision risk. The performance of each feedback method and their combinations was evaluated in two experiments; the results showed that the combination of force and visual feedback was the most beneficial for avoiding collisions.
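
    At its core, a truncated-cone virtual fixture reduces to a containment test for the instrument position, with feedback triggered when the boundary is approached. This sketch assumes a cone axis aligned with z and a simple linear radius profile, which simplifies the actual fixture geometry:

```python
def inside_truncated_cone(p, r_top, r_bottom, height):
    """True if point p = (x, y, z) lies inside a truncated cone whose axis
    is the z-axis, with radius r_bottom at z = 0 and r_top at z = height.
    A collision warning would fire when the instrument leaves this volume."""
    x, y, z = p
    if not 0.0 <= z <= height:
        return False
    r_at_z = r_bottom + (r_top - r_bottom) * (z / height)
    return x * x + y * y <= r_at_z * r_at_z
```

    Force, visual, or motion-scaling feedback can then be modulated by the distance from the instrument to the cone wall rather than by the boolean alone.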

    DOI: 10.1109/BIOROB.2016.7523632

    Scopus

    researchmap

  • Synthetic Evidential Study as Augmented Collective Thought Process — Preliminary Report Reviewed

    Toyoaki Nishida, Masakazu Abe, Takashi Ookaki, Divesh Lala, Sutasinee Thovuttikul, Hengjie Song, Yasser Mohammad, Christian Nitschke, Yoshimasa Ohmoto, Atsushi Nakazawa, Takaaki Shochi, Jean-Luc Rouas, Aurelie Bugeau, Fabien Lotte, Ming Zuheng, Geoffrey Letournel, Marine Guerry, Dominique Fourer

    7th Asian Conference on Intelligent Information and Database Systems (ACIIDS)   13 - 22   2015.3

     More details

    Language:English  

    DOI: 10.1007/978-3-319-15702-3_2

    researchmap

  • Corneal imaging revisited: An overview of corneal reflection analysis and applications Reviewed

    Christian Nitschke, Atsushi Nakazawa, Haruo Takemura

    IPSJ Transactions on Computer Vision and Applications   5   1 - 18   2013.7

     More details

    Language:English  

    The cornea of the human eye acts as a mirror that reflects light from a person's environment. These corneal reflections can be extracted from an image of the eye by modeling the eye-camera geometry as a catadioptric imaging system. As a result, one obtains the visual information of the environment and the relation to the observer (view, gaze), which allows for application in a number of fields. The recovered illumination map can be further applied to various computational tasks. This paper provides a comprehensive introduction on corneal imaging, and aims to show the potential of the topic and encourage advancement. It makes a number of contributions, including (1) a combined view on previously unrelated fields, (2) an overview of recent developments, (3) a detailed explanation on anatomic structures, geometric eye and corneal reflection modeling including multiple eye images, (4) a summary of our work and contributions to the field, and (5) a discussion of implications and promising future directions. The idea behind this paper is a geometric framework to solve persisting technical problems and enable non-intrusive interfaces and smart sensors for traditional, ubiquitous and ambient environments. © 2013 Information Processing Society of Japan.

    DOI: 10.2197/ipsjtcva.5.1

    Scopus

    CiNii Article

    researchmap

  • Corneal Imaging Revisited: An Overview of Corneal Reflection Analysis and Applications

    Nitschke Christian, Nakazawa Atsushi, Takemura Haruo

    IMT   8 ( 2 )   389 - 406   2013

     More details

    Language:English   Publisher:Information and Media Technologies Editorial Board  

    The cornea of the human eye acts as a mirror that reflects light from a person's environment. These corneal reflections can be extracted from an image of the eye by modeling the eye-camera geometry as a catadioptric imaging system. As a result, one obtains the visual information of the environment and the relation to the observer (view, gaze), which allows for application in a number of fields. The recovered illumination map can be further applied to various computational tasks. This paper provides a comprehensive introduction on corneal imaging, and aims to show the potential of the topic and encourage advancement. It makes a number of contributions, including (1) a combined view on previously unrelated fields, (2) an overview of recent developments, (3) a detailed explanation on anatomic structures, geometric eye and corneal reflection modeling including multiple eye images, (4) a summary of our work and contributions to the field, and (5) a discussion of implications and promising future directions. The idea behind this paper is a geometric framework to solve persisting technical problems and enable non-intrusive interfaces and smart sensors for traditional, ubiquitous and ambient environments.

    DOI: 10.11185/imt.8.389

    CiNii Article

    researchmap

  • Motion Coherent Tracking Using Multi-label MRF Optimization Reviewed

    David Tsai, Matthew Flagg, Atsushi Nakazawa, James M. Rehg

    INTERNATIONAL JOURNAL OF COMPUTER VISION   100 ( 2 )   190 - 202   2012.11

     More details

    Language:English   Publishing type:Research paper (scientific journal)   Publisher:SPRINGER  

    We present a novel off-line algorithm for target segmentation and tracking in video. In our approach, video data is represented by a multi-label Markov Random Field model, and segmentation is accomplished by finding the minimum energy label assignment. We propose a novel energy formulation which incorporates both segmentation and motion estimation in a single framework. Our energy functions enforce motion coherence both within and across frames. We utilize state-of-the-art methods to efficiently optimize over a large number of discrete labels. In addition, we introduce a new ground-truth dataset, called Georgia Tech Segmentation and Tracking Dataset (GT-SegTrack), for the evaluation of segmentation accuracy in video tracking. We compare our method with several recent on-line tracking algorithms and provide quantitative and qualitative performance comparisons.

    DOI: 10.1007/s11263-011-0512-5

    Web of Science

    researchmap

  • Point of Gaze Estimation through Corneal Surface Reflection in an Active Illumination Environment Reviewed

    Atsushi Nakazawa, Christian Nitschke

    COMPUTER VISION - ECCV 2012, PT II   7573 ( PART 2 )   159 - 172   2012

     More details

    Language:English   Publishing type:Research paper (international conference proceedings)   Publisher:SPRINGER-VERLAG BERLIN  

    Eye gaze tracking (EGT) is a common problem with many applications in various fields. While recent methods have achieved improvements in accuracy and usability, current techniques still share several limitations. A major issue is the need for external calibration between the gaze camera system and the scene, which commonly restricts to static planar surfaces and leads to parallax errors. To overcome these issues, the paper proposes a novel scheme that uses the corneal imaging technique to directly analyze reflections from a scene illuminated with structured light. This comprises two major contributions: First, an analytic solution is developed for the forward projection problem to obtain the gaze reflection point (GRP), where light from the point of gaze (PoG) in the scene reflects at the corneal surface into an eye image. We also develop a method to compensate for the individual offset between the optical axis and true visual axis. Second, introducing active coded illumination enables robust and accurate matching at the GRP to obtain the PoG in a scene image, which is the first use of this technique in EGT and corneal reflection analysis. For this purpose, we designed a special high-power IR LED-array projector. Experimental evaluation with a prototype system shows that the proposed scheme achieves considerable accuracy and successfully supports depth-varying environments.

    DOI: 10.1007/978-3-642-33709-3_12

    Web of Science

    researchmap

  • Display-camera calibration using eye reflections and geometry constraints Reviewed

    Christian Nitschke, Atsushi Nakazawa, Haruo Takemura

    COMPUTER VISION AND IMAGE UNDERSTANDING   115 ( 6 )   835 - 853   2011.6

     More details

    Language:English   Publishing type:Research paper (scientific journal)   Publisher:ACADEMIC PRESS INC ELSEVIER SCIENCE  

    In this paper, we describe a novel method for calibrating display-camera setups from reflections in a user's eyes. Combining both devices creates a capable controlled illumination system that enables a range of interesting vision applications in non-professional environments, including object/face reconstruction and human computer interaction. One major issue barring such systems from average homes is the geometric calibration to obtain the pose of the display, which requires special hardware and tedious user interaction. Our proposed approach eliminates this requirement by introducing the novel idea of analyzing screen reflections in the cornea of the human eye, a mirroring device that is always available. We employ a simple shape model to recover pose and reflection characteristics of the eye. Thorough experimental evaluation shows that the basic strategy results in a large error and discusses possible reasons. Based on the findings, a non-linear optimization strategy is developed that exploits geometry constraints within the system to considerably improve the initial estimate. It further allows automatic resolution of an inherent ambiguity that arises in image-based eye pose estimation. The strategy may also be integrated to improve spherical mirror calibration. We describe several comprehensive experimental studies which show that the proposed method performs stably with respect to varying subjects, display poses, eye positions, and gaze directions. The results are feasible and should be sufficient for many applications. In addition, the findings provide general insight on the application of eye reflections for geometric reconstruction. (C) 2011 Elsevier Inc. All rights reserved.

    DOI: 10.1016/j.cviu.2011.02.008

    Web of Science

    researchmap

  • Saliency Detection

    NAKAZAWA Atsushi

    The Journal of the Institute of Television Engineers of Japan   64 ( 12 )   1830 - 1832   2010.12

     More details

    Language:Japanese   Publisher:The Institute of Image Information and Television Engineers  

    DOI: 10.3169/itej.64.1830

    CiNii Article

    CiNii Books

    researchmap

    Other Link: https://jlc.jst.go.jp/DN/JALC/00360473643?from=CiNii

  • Motion capture systems Reviewed

    Atsushi Nakazawa

    Kyokai Joho Imeji Zasshi/Journal of the Institute of Image Information and Television Engineers   63 ( 9 )   1224 - 1227   2009

     More details

    Language:Japanese   Publishing type:Research paper (scientific journal)   Publisher:Inst. of Image Information and Television Engineers  

    DOI: 10.3169/itej.63.1224

    Scopus

    researchmap

  • Task recognition and person identification in cyclic dance sequences with Multi Factor Tensor analysis Reviewed

    Manoj Perera, Takaaki Shiratori, Shunsuke Kudoh, Atsushi Nakazawa, Katsushi Ikeuchi

    IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS   E91D ( 5 )   1531 - 1542   2008.5

     More details

    Language:English   Publishing type:Research paper (scientific journal)   Publisher:IEICE-INST ELECTRONICS INFORMATION COMMUNICATIONS ENG  

    In this paper, we present a novel approach to recognize motion styles and identify people using the Multi Factor Tensor (MFT) model. We apply a musical information analysis method in segmenting the motion sequence relevant to the keyposes and the musical rhythm. We define a task model by considering the repeated motion segments, where the motion is decomposed into a person-invariant factor, task, and a person-dependent factor, style. Given the motion data set, we formulate the MFT model, factorize it efficiently in different modes, and use it in recognizing the tasks and the identities of the persons performing the tasks. We capture the motion data of different people for a few cycles, segment it using the musical analysis approach, normalize the segments using a vectorization method, and realize our MFT model. In our experiments, Japanese traditional dance sequences performed by several people are used. Given an unknown probe motion segment performed at a different time, we first normalize the motion segment and flatten our MFT model appropriately, then recognize the task and the identity of the person. We follow two approaches in conducting our experiments. In one experiment, we recognize the tasks and the styles by maximizing a function in the tensor subdomain, and in the next experiment, we use a function value in the tensorial subdomain with a threshold for recognition. Interestingly, unlike the first experiment, we are capable of recognizing tasks and human identities that were not known beforehand. We conducted various experiments to evaluate the recognition ability of our proposed approaches, and the results demonstrate the high accuracy of our model.

    DOI: 10.1093/ietisy/e91-d.5.1531

    Web of Science

    CiNii Article

    researchmap

  • Development and Evaluation of a Diskless Linux System for Educational Computer System Reviewed

    MASUDA HIDEO, OGAWA TAKEFUMI, MACHIDA TAKASHI, NAKAZAWA ATSUSHI, KIYOKAWA KIYOSHI, TAKEMURA HARUO

    IPSJ journal   49 ( 3 )   1239 - 1248   2008.3

     More details

    Language:Japanese   Publisher:Information Processing Society of Japan  

    This paper presents the configuration and evaluation of the educational computer system at our university, which serves a wide variety of applications to a large number of users while reducing the Total Cost of Ownership (TCO). For the client operating system, we developed a newly designed diskless Linux. The OS and other applications are loaded from servers over the network, so the clients have no hard drives. This scheme drastically reduces TCO because it eliminates the need to replace hard drives, which fail most frequently in these types of systems. Software updates for the clients are also achieved simply by uploading files to a few servers, which further reduces maintenance costs compared with updating software stored on the local hard drives of many clients. As for client application software, we installed not only OpenOffice.org and StarSuite but also Microsoft Office running on the CrossOver Office emulator, which is in high demand among end users.

    CiNii Article

    CiNii Books

    J-GLOBAL

    researchmap

  • Iterative refinement of range images with anisotropic error distribution Reviewed

    Ryusuke Sagawa, Takeshi Oishi, Atsushi Nakazawa, Ryo Kurazume, Katsushi Ikeuchi

    Digitally Archiving Cultural Objects   193 - 205   2008

     More details

    Language:English   Publishing type:Part of collection (book)   Publisher:Springer US  

    We propose a method that refines the range measurements of range finders by computing correspondences between the vertices of multiple range images acquired from various viewpoints. Our method assumes that a range image acquired by a laser range finder has an anisotropic error distribution parallel to the ray direction. Thus, we find corresponding points of range images along the ray direction and iteratively converge the range images to minimize the distance between corresponding points. We demonstrate the effectiveness of our method by presenting experimental results on artificial and real range data. We also show that our method refines a 3D shape more accurately than the Gaussian filter. © 2008 Springer-Verlag US.

    DOI: 10.1007/978-0-387-75807_11

    Scopus

    researchmap

  • A fast simultaneous alignment of multiple range images Reviewed

    Takeshi Oishi, Atsushi Nakazawa, Ryo Kurazume, Katsushi Ikeuchi

    Digitally Archiving Cultural Objects   89 - 107   2008

     More details

    Language:English   Publishing type:Part of collection (book)   Publisher:Springer US  

    This chapter describes a fast, simultaneous alignment method for a large number of range images. Generally the most time-consuming task in aligning range images is searching corresponding points. The fastest searching method is the "Inverse Calibration" method. However, this method requires pre-computed lookup tables and precise sensor parameters. We propose a fast searching method using "index images", which work as look-up tables and are rapidly created without any sensor parameters by using graphics hardware. To accelerate the computation to estimate rigid transformations, we employed a linear error evaluation method. When the number of range images increases, the computation time for solving the linear equations becomes too long because of the large size of the coefficient matrix. On the other hand, the coefficient matrix has the characteristic of becoming sparser as the number of range images increases. Thus, we applied the Incomplete Cholesky Conjugate Gradient (ICCG) method to solve the equations and found that the ICCG greatly accelerates the matrix operation by pre-conditioning the coefficient matrix. Some experimental results in which a large number of range images are aligned demonstrate the effectiveness of our method. © 2008 Springer-Verlag US.

    DOI: 10.1007/978-0-387-75807_6

    Scopus

    researchmap

  • Breed differentiation among Japanese native chickens by specific skull features determined by direct measurements and computer vision techniques Reviewed

    Y. Ino, T. Oka, K. Nomura, T. Watanabe, S. Kawashima, T. Amano, Y. Hayashi, A. Okabe, Y. Uehara, T. Masuda, J. Takamatsu, A. Nakazawa, K. Ikeuchi, H. Endo, K. Fukuta, F. Akishinonomiya

    BRITISH POULTRY SCIENCE   49 ( 3 )   273 - 281   2008

     More details

    Language:English   Publishing type:Research paper (scientific journal)   Publisher:TAYLOR & FRANCIS LTD  

    1. Inter-breed morphological comparisons were made among 11 breeds of Japanese native chickens (Gifujidori, Hinaidori, Shokoku, Totenko, Tomaru, Satsumadori, Shamo, Koshamo, Koeyoshi, Chabo and Nagoya), White Leghorn, broiler chickens (Chunky) and red junglefowl collected in the Philippines, based on results of direct measurements and analysis by computer vision techniques of the skull.
    2. Analysis of direct measurements identified two groups of chicken: a small type that included the Chabo, Koshamo, red junglefowl, Gifujidori and Shokoku and a large type that included the remaining breeds studied. These groupings were made based on size determined both in the first (PC1) and second principal component (PC2). The greatest length of the cranium and condylobasal length greatly contributed to the morphological differences between these two groups.
    3. Analysis by computer vision techniques, however, identified three groups: the Bantam group (which includes red junglefowl), Shokoku group and Shamo group. White Leghorn clustered within the Shokoku group while the broiler chicken belonged to the Shamo group. The region around the junction of the neural cranium and the visceral cranium contributed greatly to the morphological differences among breeds, both in the PC1 and PC2.

    DOI: 10.1080/00071660802094727

    Web of Science

    J-GLOBAL

    researchmap

  • Parallel alignment of a large number of range images Reviewed

    Takeshi Oishi, Atsushi Nakazawa, Ryo Kurazume, Katsushi Ikeuchi, Ryusuke Sagawa

    Digitally Archiving Cultural Objects   109 - 126   2008

     More details

    Language:English   Publishing type:Part of collection (book)   Publisher:Springer US  

    This chapter describes a method for parallel alignment of multiple range images. There are problems of computational time and memory space in aligning a large number of range images simultaneously. We developed a parallel method to address these problems. Searching for corresponding points between two range images is time-consuming and requires considerable memory space when performed independently. However, this process can be performed in parallel, with each corresponding pair of range images assigned to a node. Because the computation time is approximately proportional to the number of vertices, by assigning the pairs so that the number of vertices computed is equal on each node, the load on each node is effectively distributed. In order to reduce the amount of memory required on each node, a hypergraph that represents the correspondences of range images is created, and heuristic graph partitioning algorithms are applied to determine the optimal assignment of the pairs. Moreover, by rejecting redundant dependencies, it becomes possible to accelerate computation time and reduce the amount of memory required on each node. The method was tested on a 16-processor PC cluster, where it demonstrated high scalability and improved performance. © 2008 Springer-Verlag US.

    DOI: 10.1007/978-0-387-75807_7

    Scopus

    researchmap

  • The Great Buddha project: Digitally archiving, restoring, and analyzing cultural heritage objects Reviewed

    Katsushi Ikeuchi, Takeshi Oishi, Jun Takamatsu, Ryusuke Sagawa, Atsushi Nakazawa, Ryo Kurazume, Ko Nishino, Mawo Kamakura, Yasuhide Okamoto

    INTERNATIONAL JOURNAL OF COMPUTER VISION   75 ( 1 )   189 - 208   2007.10

     More details

    Language:English   Publishing type:Research paper (scientific journal)   Publisher:SPRINGER  

    This paper presents an overview of our research project on digital preservation of cultural heritage objects and digital restoration of the original appearance of these objects. As an example of these objects, this project focuses on the preservation and restoration of the Great Buddhas. These are relatively large objects existing outdoors and providing various technical challenges. Geometric models of the Great Buddhas are digitally archived through a pipeline consisting of acquiring data, aligning multiple range images, and merging these images. We have developed two alignment algorithms: a rapid simultaneous algorithm, based on graphics hardware, for quick data checking on site, and a parallel alignment algorithm, based on a PC cluster, for precise adjustment at the university. We have also designed a parallel voxel-based merging algorithm for connecting all aligned range images. On the geometric models created, we aligned texture images acquired from color cameras. We also developed two texture mapping methods. In an attempt to restore the original appearance of historical objects, we have synthesized several buildings and statues using scanned data and a literature survey with advice from experts.

    DOI: 10.1007/s11263-007-0039-y

    Web of Science

    researchmap

  • Learning from observation paradigm: Leg task models for enabling a biped humanoid robot to imitate human dances Reviewed

    Shin'ichiro Nakaoka, Atsushi Nakazawa, Fumio Kanehiro, Kenji Kaneko, Mitsuharu Morisawa, Hirohisa Hirukawa, Katsushi Ikeuchi

    INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH   26 ( 8 )   829 - 844   2007.8

     More details

    Language:English   Publishing type:Research paper (scientific journal)   Publisher:SAGE PUBLICATIONS LTD  

    This paper proposes a framework that achieves the Learning from Observation paradigm for learning dance motions. The framework enables a humanoid robot to imitate dance motions captured from human demonstrations. This study especially focuses on leg motions to achieve a novel attempt in which a biped-type robot imitates not only upper body motions but also leg motions including steps. Body differences between the robot and the original dancer make the problem difficult because the differences prevent the robot from straightforwardly following the original motions and they also change dynamic body balance. We propose leg task models, which play a key role in solving the problem. Low-level tasks in leg motion are modelled so that they clearly provide essential information required for keeping dynamic stability and important motion characteristics. The models divide the problem of adapting motions into the problem of recognizing a sequence of the tasks and the problem of executing the task sequence. We have developed a method for recognizing the tasks from captured motion data and a method for generating the motions of the tasks that can be executed by existing robots including HRP-2. HRP-2 successfully performed the generated motions, which imitated a traditional folk dance performed by human dancers.

    DOI: 10.1177/0278364907079430

    Web of Science

    researchmap

  • Real-time space carving using graphics hardware Reviewed

    Christian Nitschke, Atsushi Nakazawa, Haruo Takemura

    IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS   E90D ( 8 )   1175 - 1184   2007.8

     More details

    Language:English   Publishing type:Research paper (scientific journal)   Publisher:IEICE-INST ELECTRONICS INFORMATION COMMUNICATIONS ENG  

    Reconstruction of real-world scenes from a set of multiple images is a topic in computer vision and 3D computer graphics with many interesting applications. Attempts have been made at real-time reconstruction on PC cluster systems. While these provide enough performance, they are expensive and inflexible. Approaches that use GPU hardware acceleration on single workstations achieve real-time frame rates for novel-view synthesis, but do not provide an explicit volumetric representation. This work shows our efforts in developing a GPU hardware-accelerated framework for providing a photo-consistent reconstruction of a dynamic 3D scene. High performance is achieved by employing a shape-from-silhouette technique in advance. Since the entire processing is done on a single PC, the framework can be applied in mobile environments, enabling a wide range of further applications. We explain our approach using programmable vertex and fragment processors and compare it to highly optimized CPU implementations. We show that the new approach can outperform the latter by more than one order of magnitude and give an outlook on interesting future enhancements.

    DOI: 10.1093/ietisy/e90-d.8.1175

    Web of Science

    researchmap

  • Human pose estimation from volume data and topological graph database Reviewed

    Hidenori Tanaka, Atsushi Nakazawa, Haruo Takemura

    COMPUTER VISION - ACCV 2007, PT I, PROCEEDINGS   4843 ( PART 1 )   618 - +   2007

     More details

    Language:English   Publishing type:Research paper (international conference proceedings)   Publisher:SPRINGER-VERLAG BERLIN  

    This paper proposes a novel volume-based motion capture method using a bottom-up analysis of volume data and an example topology database of the human body. By using a two-step graph matching algorithm with many example topological graphs corresponding to postures that a human body can take, the proposed method does not require any initial parameters or iterative convergence processes, and it can solve the changing topology problem of the human body. First, three-dimensional curved lines (skeleton) are extracted from the captured volume data using the thinning process. The skeleton is then converted into an attributed graph. By using a graph matching algorithm with a large amount of example data, we can identify the body parts from each curved line in the skeleton. The proposed method is evaluated using several video sequences of a single person and multiple people, and we can confirm the validity of our approach.

    Web of Science

    researchmap

  • An Interactive Content Management System with Hierarchized 3D Data

    Yousuke Kimura, Tomohiro Mashita, Atsushi Nakazawa, Takashi Machida, Kiyoshi Kiyokawa, Haruo Takemura

    Human Interface Symposium   2006.9

     More details

    Language:Japanese   Publishing type:Research paper (other academic)  

    researchmap

  • A Large 3D Data Rendering Java Applet with appending annotations

    Yousuke Kimura, Atsushi Nakazawa, Takashi Machida, Kiyoshi Kiyokawa, Haruo Takemura

    IPSJ Multimedia, Distributed, Cooperative, and Mobile Symposium (DICOMO) Proceedings   2006.7

     More details

    Language:Japanese   Publishing type:Research paper (other academic)  

    researchmap

  • Leg Task Models for Reproducing Human Dance Motions on Biped Humanoid Robots

    NAKAOKA Shinichiro, NAKAZAWA Atsushi, KANEHIRO Fumio, KANEKO Kenji, MORISAWA Mitsuharu, HIRUKAWA Hirohisa, IKEUCHI Katsushi

    JRSJ   24 ( 3 )   388 - 399   2006.4

     More details

    Language:Japanese   Publisher:The Robotics Society of Japan  

    In this paper, we propose a method that enables a biped humanoid robot to reproduce human dance motions with its whole body. Our method is based on the paradigm of Learning from Observation. In this study, a robot uses its own legs to support the body during a dance performance. We propose leg task models, which can solve the problems caused by severe constraints in adapting human motions to the legs of a robot. First, elements of the leg task models are recognized from motion data captured from human performances. Then motion data of a robot is regenerated from the recognized elements so that the motion is stably executable on the robot. Our method was verified by experiments on the humanoid robot HRP-2 using a traditional folk dance. HRP-2 successfully performed dance motions that were automatically reproduced from motion data captured from human dance performances.

    DOI: 10.7210/jrsj.24.388

    CiNii Article

    CiNii Books

    researchmap

    Other Link: https://jlc.jst.go.jp/DN/JALC/00278397673?from=CiNii

  • A Large 3D Data Rendering System for Mobile Computer with Java Applet

    Yousuke Kimura, Atsushi Nakazawa, Takashi Machida, Kiyoshi Kiyokawa, Haruo Takemura

    Proceedings of the IEICE General Conference   2006.3

     More details

    Language:Japanese   Publishing type:Research paper (other academic)  

    researchmap

  • Path Planning using Potential Field for 3D Reconstruction of a Disaster Site by a Mobile Robot

    Katsuya KAWAI, Atsushi NAKAZAWA, Kiyoshi KIYOKAWA, Haruo TAKEMURA

    IEICE Technical Report   2006.2

     More details

    Language:Japanese   Publishing type:Research paper (other academic)  

    researchmap

  • Hierarchical 3D data rendering system synchronizing with HTML Reviewed

    Yousuke Kimura, Tomohiro Mashita, Atsushi Nakazawa, Takashi Machida, Kiyoshi Kiyokawa, Haruo Takemura

    ADVANCES IN ARTIFICIAL REALITY AND TELE-EXISTENCE, PROCEEDINGS   4282   1212 - +   2006

     More details

    Language:English   Publishing type:Research paper (international conference proceedings)   Publisher:SPRINGER-VERLAG BERLIN  

    We propose a new rendering system for large-scale 3D geometric data that can be used with web-based content management systems (CMS). To achieve this, we employed the hierarchical geometry encoding method "QSplat" and implemented it in a Java and JOGL (Java bindings for OpenGL) environment. Users can view large-scale geometric data using conventional HTML browsers on machines with modest CPUs and low-speed networks. Furthermore, the system is platform independent. We added new functionalities so that users can easily understand the geometric data: annotations and HTML synchronization. Users can view the geometric data with associated annotations that give the names or detailed explanations of particular portions. HTML synchronization enables users to smoothly and interactively switch between our rendering system and HTML contents. The experimental results show that our system achieves an interactive frame rate even for large-scale data that other systems cannot render.

    DOI: 10.1007/11941354_126

    Web of Science

    researchmap

  • Dancing-to-music character animation Reviewed

    Takaaki Shiratori, Atsushi Nakazawa, Katsushi Ikeuchi

    COMPUTER GRAPHICS FORUM   25 ( 3 )   449 - 458   2006

     More details

    Language:English   Publishing type:Research paper (scientific journal)   Publisher:WILEY-BLACKWELL  

    In computer graphics, considerable research has been conducted on realistic human motion synthesis. However, most research does not consider human emotional aspects, which often strongly affect human motion. This paper presents a new approach for synthesizing dance performance matched to input music, based on the emotional aspects of dance performance. Our method consists of a motion analysis, a music analysis, and a motion synthesis based on the extracted features. In the analysis steps, motion and music feature vectors are acquired. Motion vectors are derived from motion rhythm and intensity, while music vectors are derived from musical rhythm, structure, and intensity. For synthesizing dance performance, we first find candidate motion segments whose rhythm features match those of each music segment, and then we find the motion segment set whose intensity is similar to that of the music segments. Additionally, our system allows animators to control the synthesis process by assigning desired motion segments to specified music segments. The experimental results indicate that our method creates dance performance as if a character were listening and expressively dancing to the music.

    DOI: 10.1111/j.1467-8659.2006.00964.x

    Web of Science

    researchmap

  • Development of a Control System of a Mobile Robot for 3D Reconstruction of a Disaster Area

    Katsuya KAWAI, Atsushi NAKAZAWA, Kiyoshi KIYOKAWA, Haruo TAKEMURA

    電子情報通信学会 技術研究報告, MVE2005-29   105 ( 256 )   13 - 18   2005.9

     More details

    Language:Japanese   Publishing type:Research paper (other academic)   Publisher:The Institute of Electronics, Information and Communication Engineers  

    Three-dimensional geometry information, as well as live video images, of a disaster site is useful for damage investigation and rescue planning. In this report, we present an information-gathering mobile robot that acquires the geometry of a disaster site. After moving to each location directed by a remote operator, the robot acquires omnidirectional range data using an omnidirectional range sensor, and estimates its position and orientation from the wheel rotation counts and an orientation sensor. Using the estimated location as the initial value, the current and past range data are registered by the ICP algorithm. Indoor experiments have shown that the self-localization error was within five percent.
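
    The registration step uses ICP with the odometry estimate as the initial value. A minimal 2D point-to-point ICP, with brute-force nearest neighbours and a closed-form rigid transform, conveys the idea; this is a generic textbook sketch, not the robot's implementation:

```python
import math

def nearest(p, pts):
    # Brute-force nearest neighbour in the destination point set.
    return min(pts, key=lambda q: (p[0]-q[0])**2 + (p[1]-q[1])**2)

def icp_2d(src, dst, iters=20):
    # Minimal point-to-point ICP in 2D (rotation + translation).
    src = [tuple(p) for p in src]
    for _ in range(iters):
        pairs = [(p, nearest(p, dst)) for p in src]
        n = len(pairs)
        # Centroids of both sides of the correspondences.
        csx = sum(p[0] for p, _ in pairs) / n
        csy = sum(p[1] for p, _ in pairs) / n
        cdx = sum(q[0] for _, q in pairs) / n
        cdy = sum(q[1] for _, q in pairs) / n
        # Closed-form 2D Procrustes: optimal rotation angle.
        sxx = sum((p[0]-csx)*(q[0]-cdx) + (p[1]-csy)*(q[1]-cdy) for p, q in pairs)
        sxy = sum((p[0]-csx)*(q[1]-cdy) - (p[1]-csy)*(q[0]-cdx) for p, q in pairs)
        th = math.atan2(sxy, sxx)
        c, s = math.cos(th), math.sin(th)
        tx, ty = cdx - (c*csx - s*csy), cdy - (s*csx + c*csy)
        src = [(c*x - s*y + tx, s*x + c*y + ty) for x, y in src]
    return src
```

    A good odometry-based initial pose, as used by the robot, keeps the nearest-neighbour correspondences mostly correct, which is what lets plain ICP converge.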

    CiNii Article

    CiNii Books

    researchmap

  • Parameter Estimation of Natural Quadric Surfaces from Range Image Reviewed

    Atsushi Nakazawa, Kiyoshi Kiyokawa, Haruo Takemura, Kokushi Yamamoto

    画像の認識と理解シンポジウム   2005.7

     More details

    Language:Japanese   Publishing type:Research paper (other academic)  

    researchmap

  • Development of a measurement and presentation system for 3 dimensional reconstruction with a mobile robot

    Katsuya Kawai, Kensaku Saitoh, Takashi Machida, Atsushi Nakazawa, Kiyoshi Kiyokawa, Haruo Takemura

    画像の認識・理解シンポジウム(MIRU)講演論文集   2005.7

     More details

    Language:Japanese   Publishing type:Research paper (other academic)  

    researchmap

  • Parameter Estimation of Natural Quadric Surfaces from Range Image

    Kokushi Yamamoto, Atsushi Nakazawa, Kiyoshi Kiyokawa, Haruo Takemura

    電子情報通信学会技術報告PRMU2004-188   2005.2

     More details

    Language:Japanese   Publishing type:Research paper (other academic)  

    researchmap

  • Upper Body Pose Estimation Using a Multi-joint CG Model and Range Images

    平尾 公男, 中澤 篤志, 清川 清, 竹村 治雄

    電子情報通信学会 技術研究報告   2005.1

     More details

    Language:Japanese   Publishing type:Research paper (other academic)  

    researchmap

  • A Novel Osteometrical Method Using Computer Vision Techniques for Comparison of Morphological Differences

    Takamatsu Jun, Uehara Yasuhiko, Masuda Tomohito, Nakazawa Atsushi, Ikeuchi Katsushi, Okabe Atsuyuki, Hayashi Yoshihiro, Ino Yasuko, Oka Takao, Nomura Koh, Amano Takashi, Akishinonomiya Fumihito

    Journal of the Yamashina Institute for Ornithology   36 ( 2 )   120 - 128   2005

     More details

    Language:English   Publisher:Yamashina Institute for Ornithology  

    DOI: 10.3312/jyio.36.120

    CiNii Article

    researchmap

  • Development of a Ubiquitous VR Learning System and Its Contents Reviewed

    中澤篤志, 梶田将司, 角所考

    日本バーチャルリアリティ学会誌, Vol. 10, No. 2, pp. 98-103, 2005.6   2005

     More details

    Language:Japanese  

    researchmap

  • Parameter Estimation of Quadric Surface Models from Range Images Using the Hough Transform and Expectation Maximization

    山本 国士, 中澤 篤志, 清川 清, 竹村 治雄

    電子情報通信学会 技術研究報告   2004.12

     More details

    Language:Japanese   Publishing type:Research paper (other academic)  

    researchmap

  • Pose Estimation of Human Upper Body Using Multi-joint CG Model and Stereo Video Images Reviewed

    Kimio Hirao, Atsushi Nakazawa, Kiyoshi Kiyokawa, Haruo Takemura

    Proc. Int. Conf. on Artificial Reality and Telexistence (ICAT)   2004.11

     More details

    Language:English   Publishing type:Research paper (international conference proceedings)  

    researchmap

  • Pose Estimation of Both Arms Using a Monocular Camera and Motion Capture Data

    平尾 公男, 中澤 篤志, 清川 清, 竹村 治雄

    ヒューマンインタフェースシンポジウム講演論文集   2004.10

     More details

    Language:Japanese   Publishing type:Research paper (other academic)  

    researchmap

  • A 3D Transmission System for Remote Environments Using a Laser Scanner and a Turntable

    河合 克哉, 中澤 篤志, 清川 清, 竹村 治雄

    日本バーチャルリアリティ学会 大会論文集   2004.9

     More details

    Language:Japanese   Publishing type:Research paper (other academic)  

    researchmap

  • Pose Estimation of Both Arms by Residual Matching between Multi-joint CG Model Images and Human Images

    平尾 公男, 中澤 篤志, 清川 清, 竹村 治雄

    電子情報通信学会 総合大会講演論文集, D-12-125, Mar. 2004.   2004.3

     More details

    Language:Japanese   Publishing type:Research paper (other academic)  

    researchmap

  • A Proposal for a 3D Transmission System for Remote Environments Using a Real-time Range Finder and a Turntable

    河合 克哉, 清川 清, 中澤 篤志, 竹村 治雄

    電子情報通信学会 総合大会講演論文集, D-11-78, Mar. 2004.   2004.3

     More details

    Language:Japanese   Publishing type:Research paper (other academic)  

    researchmap

  • Leg motion primitives for a dancing humanoid robot Reviewed

    S Nakaoka, A Nakazawa, K Yokoi, K Ikeuchi

    2004 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, VOLS 1- 5, PROCEEDINGS   610 - 615   2004

     More details

    Language:English   Publishing type:Research paper (international conference proceedings)   Publisher:IEEE  

    The goal of the study described in this paper is to develop a total technology for archiving human dance motions. A key feature of this technology is a dance replay by a humanoid robot. Although human dance motions can be acquired by a motion capture system, a robot cannot exactly follow the captured data because of different body structure and physical properties between the human and the robot. In particular, leg motions are too constrained to be converted from the captured data because the legs must interact with the floor and keep dynamic balance within the mechanical constraints of current robots. To solve this problem, we have designed a symbolic description of leg motion primitives in a dance performance. Human dance actions are recognized as a sequence of primitives and the same actions of the robot can be regenerated from them. This framework is more reasonable than modifying the original motion to adapt the robot constraints. We have developed a system to generate feasible robot motions from a human performance, and realized a dance performance by the robot HRP-1S.

    Web of Science

    researchmap

  • Creating Virtual Buddha Statues through Observation Reviewed

    Katsushi Ikeuchi, Atsushi Nakazawa, Ko Nishino, Takeshi Oishi

    IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops   1   2003

     More details

    Language:English   Publishing type:Research paper (international conference proceedings)   Publisher:IEEE Computer Society  

    This paper overviews our research on digital preservation of cultural assets and digital restoration of their original appearance. Geometric models are digitally achieved through a pipeline consisting of scanning, registering and merging multiple range images. We have developed a robust simultaneous registration method and an efficient and robust voxel-based integration method. On the geometric models created, we have to align texture images acquired from a color camera. We have developed two texture mapping methods. In an attempt to restore the original appearance of historical heritage objects, we have synthesized several buildings and statues using scanned data and literature survey with advice from experts.

    DOI: 10.1109/CVPRW.2003.10001

    Scopus

    researchmap

  • Rhythmic motion analysis using motion capture and musical information Reviewed

    T Shiratori, A Nakazawa, K Ikeuchi

    PROCEEDINGS OF THE IEEE INTERNATIONAL CONFERENCE ON MULTISENSOR FUSION AND INTEGRATION FOR INTELLIGENT SYSTEMS   89 - 94   2003

     More details

    Language:English   Publishing type:Research paper (international conference proceedings)   Publisher:IEEE  

    The number of Japanese traditional dancers has been decreasing. Without performers, some dances will disappear, because they cannot be recorded by conventional media such as paper. We have proposed an archiving method specifically for dancing patterns. Our method has four main stages: 1. digitizing motions with motion capture systems; 2. analyzing motions; 3. synthesizing motions; 4. reproducing the dance motions with CG and humanoid robots. To record the moving patterns effectively, motion primitives are extracted. Each motion primitive describes a basic motion. However, most previous primitive extraction fails to capture the rhythm, which results in unrhythmic synthesized motion. In this paper, we propose a new motion analysis method that integrates musical rhythm into the motion primitives. Our experiment confirmed that our motion analysis yields motion primitives in accordance with the music rhythm.

    Web of Science

    researchmap

  • Analysis and synthesis of human dance motions Reviewed

    A Nakazawa, S Nakaoka, T Shiratori, K Ikeuchi

    PROCEEDINGS OF THE IEEE INTERNATIONAL CONFERENCE ON MULTISENSOR FUSION AND INTEGRATION FOR INTELLIGENT SYSTEMS   83 - 88   2003

     More details

    Language:English   Publishing type:Research paper (international conference proceedings)   Publisher:IEEE  

    This paper presents a method for synthesizing stylistic human motions through visual observation. The human motion data is acquired from a motion capture system. The whole motion sequence is divided into motion elements and clustered into groups according to the correlation of the end-effectors' trajectories. We call these segments 'motion primitives'. By concatenating these motion primitives, we can generate new dance motions. We also consider a motion primitive to consist of a basic motion and a motion style. The basic motion is common to all dancers, and the style represents their characteristics. We extracted these two components through further analysis steps. The experimental results show the validity of our approach.
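
    The grouping step (clustering motion elements by trajectory correlation) can be sketched with plain Pearson correlation and a greedy assignment; the threshold and the one-dimensional trajectories are simplifying assumptions, not the paper's actual procedure:

```python
def correlation(a, b):
    # Pearson correlation of two equal-length 1-D trajectories
    # (assumes neither trajectory is constant).
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb)

def cluster_segments(segs, thresh=0.9):
    # Greedy grouping: each segment joins the first group whose
    # representative it correlates with above `thresh`, otherwise
    # it starts a new group.
    groups = []
    for s in segs:
        for g in groups:
            if correlation(s, g[0]) >= thresh:
                g.append(s)
                break
        else:
            groups.append([s])
    return groups
```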

    Web of Science

    researchmap

  • Synthesize stylistic human motion from examples Reviewed

    A Nakazawa, S Nakaoka, K Ikeuchi

    2003 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, VOLS 1-3, PROCEEDINGS   3899 - 3904   2003

     More details

    Language:English   Publishing type:Research paper (international conference proceedings)   Publisher:IEEE  

    Human body motion synthesis is highly necessary for humanoid robots' motion planning and for computer animation. In this paper, a new method for generating human-like natural motions based on a motion database acquired with motion capture systems is described. In the analysis step, the acquired motions are divided into motion segments, and the characteristic poses and motions are archived as 'motion styles'. A motion style is a kind of human skill, and it is unique to the motion's scenario, such as different kinds of dances. In the synthesis step, users specify the key poses of human figures. The system generates the characteristic motions according to the user's directions and the motion style database. The experimental results show that this framework can synthesize realistic 'stylized' motions.

    Web of Science

    researchmap

  • Parallel alignment of a large number of range images Reviewed

    T Oishi, R Sagawa, A Nakazawa, R Kurazume, K Ikeuchi

    FOURTH INTERNATIONAL CONFERENCE ON 3-D DIGITAL IMAGING AND MODELING, PROCEEDINGS   2003-January   195 - 202   2003

     More details

    Language:English   Publishing type:Research paper (international conference proceedings)   Publisher:IEEE COMPUTER SOC  

    This paper describes a method for parallel alignment of multiple range images. It is difficult to align a large number of range images simultaneously, so we developed a parallel method to improve the time and memory performance of the alignment process. Although a general simultaneous alignment algorithm searches correspondences over all pairs of range images, our method rejects redundant dependencies, making it possible to accelerate computation and reduce the amount of memory used. Since the computation between two range images can be performed independently, each corresponding pair of range images is assigned to a node. Because the computation time is proportional to the number of vertices assigned to each node, assigning the pairs so that the number of vertices computed is equal on each node effectively distributes the load. Heuristic graph-partitioning algorithms are applied to this problem to reduce the amount of memory used on each node. The method was tested on a 16-processor PC cluster, where it demonstrated high extensibility and improved time and memory performance.
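
    The load-distribution idea (assign range-image pairs so each node computes a similar number of vertices) resembles a longest-processing-time greedy heuristic. The sketch below is our illustration of that balancing idea, not the paper's graph-partitioning algorithm:

```python
def assign_pairs(pair_vertices, n_nodes):
    # Greedy balancing: visit pairs from heaviest to lightest and
    # give each to the node with the smallest accumulated vertex
    # count, so per-node workloads end up roughly equal.
    loads = [0] * n_nodes
    plan = [[] for _ in range(n_nodes)]
    for pid, verts in sorted(pair_vertices.items(), key=lambda kv: -kv[1]):
        i = loads.index(min(loads))
        loads[i] += verts
        plan[i].append(pid)
    return plan, loads
```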

    DOI: 10.1109/IM.2003.1240250

    Web of Science

    Scopus

    researchmap

  • Iterative Refinement of Range Images with Anisotropic Error Distribution Reviewed

    Ryusuke Sagawa, Takeshi Oishi, Atsushi Nakazawa, Ryo Kurazume, Katsushi Ikeuchi

    IEEE/RSJ International Conference on Intelligent Robots and Systems, Lausanne, Switzerland, September 30 - October 4, 2002   1   79 - 85   2002

     More details

    Publishing type:Research paper (international conference proceedings)   Publisher:IEEE  

    We propose a method that refines the range measurements of range finders by computing correspondences between the vertices of multiple range images acquired from various viewpoints. Our method assumes that a range image acquired by a laser range finder has an anisotropic error distribution parallel to the ray direction. Thus, we find corresponding points of range images along the ray direction. We iteratively converge the range images to minimize the distance between corresponding points. We demonstrate the effectiveness of our method with experimental results on artificial and real range data. We also show that our method refines a 3D shape more accurately than the Gaussian filter does.
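
    The key difference from an isotropic (Euclidean nearest-point) search is that correspondences are sought along the viewing ray. A toy version of that search, with hypothetical point and ray representations:

```python
def ray_correspondence(origin, ray, cloud):
    # Among candidate points, pick the one with the smallest
    # perpendicular distance to the line origin + t * ray, so the
    # match is constrained to the measurement ray rather than to
    # plain Euclidean proximity. `ray` must be a unit vector.
    def dist2_to_ray(p):
        v = [p[i] - origin[i] for i in range(3)]
        t = sum(v[i] * ray[i] for i in range(3))   # projection onto ray
        foot = [origin[i] + t * ray[i] for i in range(3)]
        return sum((p[i] - foot[i]) ** 2 for i in range(3))
    return min(cloud, key=dist2_to_ray)
```

    A point far along the ray but close to it wins over a point that is nearer in Euclidean terms but off-axis, which matches the assumed error model.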

    DOI: 10.1007/978-0-387-75807_11

    Scopus

    researchmap

    Other Link: http://dblp.uni-trier.de/db/conf/iros/iros2002.html#conf/iros/SagawaONKI02

▼display all

Books

  • Corneal Imaging - The Wiley Handbook of Human Computer Interaction

    Christian Nitschke, Atsushi Nakazawa( Role: Joint author)

    Wiley-Blackwell  2018.3 

     More details

  • Motion Capture - The Wiley Handbook of Human Computer Interaction

    NAKAZAWA Atsushi, Takaaki Shiratori( Role: Joint author)

    Wiley-Blackwell  2017.3 

     More details

  • Conversational Informatics: A Data-Intensive Approach with Emphasis on Nonverbal Communication

    Toyoaki Nishida, Atsushi Nakazawa, Yoshimasa Ohmoto( Role: Joint author)

    Springer  2014.8 

     More details

    Language:English

    researchmap

MISC

  • Estimating a person’s internal state and its application for understanding medical and nursing care interactions

    中澤篤志

    情報処理学会研究報告(Web)   2023 ( CVIM-232 )   2023

  • Care × Information Processing: The Future of Nursing, Care, and Information Processing in an Aging Society

    中澤篤志

    情報処理   64 ( 8 )   2023

  • THE EFFECT OF COMMUNICATION SKILLS TRAINING FOR NURSING STUDENTS BY AUGMENTED REALITY SIMULATION SYSTEM

    Masaki Kobayashi, Miyuki Iwamoto, Saki Une, Ryo Kurazume, Atsushi Nakazawa, Miwako Honda

    INNOVATION IN AGING   6   440 - 440   2022.11

     More details

    Language:English   Publishing type:Research paper, summary (international conference)   Publisher:OXFORD UNIV PRESS  

    Web of Science

    researchmap

  • Computational and Neuroscientific Elucidation of "Tender Care" Interaction

    中澤篤志

    戦略的創造研究推進事業CREST研究終了報告書(Web)   2022   2022

  • Detection of human boredom from video

    立川悠輝, 中澤篤志

    電子情報通信学会技術研究報告(Web)   122 ( 200(MVE2022 18-33) )   2022

  • Estimation of out-of-view attention region

    原航基, 中澤篤志

    電子情報通信学会技術研究報告(Web)   122 ( 200(MVE2022 18-33) )   2022

  • Generation of natural agent behavior based on machine learning

    宮澤恒光, 中澤篤志

    電子情報通信学会技術研究報告(Web)   122 ( 200(MVE2022 18-33) )   2022

  • Understanding the tender-care skills using sensing and AI

    中澤篤志

    電子情報通信学会大会講演論文集(CD-ROM)   2022   2022

  • A multimodal analysis of mother-child interaction in children with ASD-A single case study of a girl with verbal communication difficulty-

    伊藤凌太朗, 松島佳苗, 長岡千賀, アンミー, 宇田あかね, 加藤寿宏, 吉川左紀子, 中澤篤志, 本田美和子, ジネスト イヴ, 安藤夏子, 岩元美由紀

    電子情報通信学会技術研究報告(Web)   121 ( 438(HCS2021 61-75) )   2022

  • Generation and Application of Drawing Style Space for Character Face Images

    井上剛志, 中澤篤志

    電子情報通信学会技術研究報告(Web)   122 ( 200(MVE2022 18-33) )   2022

  • 中澤篤志 倉爪亮 本田美和子

    2022

     More details

  • Emotional and Physiological Reactions of Contactee in Multimodal Communication with Touch: Guide to the Technical Report and Template

    岩元美由紀, 桜栄翔太, 中澤篤志

    電子情報通信学会技術研究報告(Web)   122 ( 23(HCS2022 1-34) )   2022

  • A Study on Physiological Responses Arising from a Non-contactor's Emotions toward a Contactor's Touching Behavior

    岩元美由紀, 中澤篤志

    電子情報通信学会HCGシンポジウム2021   2021.12

     More details

  • A Study on Behavioral Changes in Children with Autism Spectrum Disorder and Their Parents through Humanitude

    井上翔太, 中澤篤志, 岩元美由紀, 加藤寿宏, 吉川左紀子

    2021.12

     More details

  • The Drowsiness-suppression Effect of a Social Robot

    原航基, 中澤篤志, 竹本あゆみ

    電子情報通信学会HCGシンポジウム   2021.5

     More details

  • Deep Facial Expression Transfer: StyleGAN-based Facial Motion Transfer

    竹内至生, 中澤篤志

    電子情報通信学会技術研究報告(Web)   120 ( 389(IMQ2020 10-35) )   2021

  • Scene Recognition from Corneal Surface Reflection Images

    大嶋佑紀, 前田響介, 枝本祐典, 中澤篤志

    電子情報通信学会技術研究報告(Web)   120 ( 393(BioX2020 40-48) )   2021

  • Multimodal Analysis of Mother-Child Interaction in Children with Autism Spectrum Disorder: Co-occurrence of Positive-emotion Expression and Face Orientation

    長岡千賀, 松島佳苗, アン ミー, 伊藤凌太朗, 加藤寿宏, 吉川左紀子, 中澤篤志, 本田美和子, ジネスト イヴ, 安藤夏子, 岩元美由紀

    日本心理学会大会発表抄録集   85th   2021

  • Imitation of Human Eyeblinks and Nodding Using an Android Toward Attentive Listening

    湯口彰重, 高松淳, 中澤篤志, 小笠原司

    日本機械学会ロボティクス・メカトロニクス講演会講演論文集(CD-ROM)   2020   2020

  • Correlation analysis of personality by using home life behavior measurements

    沼田崇志, 工藤泰幸, 加藤猛, 金子迪大, 野村理朗, 森口佑介, 中澤篤志, 嶺竜治

    情報処理学会研究報告(Web)   2020 ( HCI-190 )   2020

  • Examining the Imitation Effect of Eyeblinks and Nodding by an Android toward Better Listening

    湯口彰重, GARCIA RICARDEZ Gustavo Alfonso, 高松淳, 中澤篤志, 小笠原司

    日本ロボット学会学術講演会予稿集(CD-ROM)   38th   2020

  • "Seeing People" with Artificial Intelligence: Measuring People's Eyes, Behavior, and Caregiving Skill

    中澤篤志

    学習院大学計算機センター年報   40   120 - 129   2020

     More details

    Language:Japanese   Publisher:学習院大学計算機センター  

    J-GLOBAL

    researchmap

  • Imitation of Looking and Touching Actions by a Robot and Evaluation through Them

    高松淳, 豊島健太, 佐野哲也, 湯口彰重, 中澤篤志, ALFONSO Garcia Ricardez Gustavo, 丁明, 小笠原司

    日本ロボット学会学術講演会予稿集(CD-ROM)   37th   2019

  • Understanding Humanitude from Behavior and the Brain

    中澤 篤志

    医学界新聞   3154号   13   2018

     More details

  • An Android System Capable of Presenting Conversational Gaze Based on Human Gaze Measurement

    佐野哲也, 湯口彰重, 中澤篤志, GARCIA Gustavo, 高松淳, 小笠原司

    計測自動制御学会システムインテグレーション部門講演会(CD-ROM)   19th   2018

  • Open Ideas in Fundamental PRMU Research: The Second PRMU Grand Challenge

    安倍満, 舩冨卓哉, 木村昭悟, 中澤篤志, 山崎俊彦, 松下康之, 内田誠一, 前田英作

    電子情報通信学会技術研究報告   117 ( 362(PRMU2017 101-111) )   39   2017.12

     More details

    Language:Japanese  

    J-GLOBAL

    researchmap

  • Estimation of task difficulty and habituation effect while visual manipulation using pupillary response Reviewed

    Asami Matsumoto, Yuta Tange, Atsushi Nakazawa, Toyoaki Nishida

    Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)   10165   24 - 35   2017

     More details

    Language:English   Publishing type:Article, review, commentary, editorial, etc. (international conference proceedings)   Publisher:Springer Verlag  

    In this paper, we show the relationship between pupil dilation and visual manipulation tasks to measure the magnitude of the individual habituation effect and task difficulty. Our findings show that pupil dilation can be used as a new physiological signal in the application of audience measurement, affective computing, affective communications, and user interface design. We built a pointer maze game where a subject moves a pointer from start to end positions on a straight pathway, and we observe the subject's pupil size while changing the pathway width and performing the game repeatedly. Through the two experiments, we found the maximum pupil size increases during the game when the pathway narrows. The first experiment indicates the difficulty of the task (narrower pathway) is related to a larger pupil diameter. On the basis of these results, we built models relating (1) pupil size and pathway width, (2) pupil size and duration, and (3) pathway width and duration. The second experiment indicates the pupil constriction is related to the habituation effect of the users. While a similar effect has already been reported, the magnitude of pupil dilation during our task was about ten times as high as that in other tasks, so our confidence in the model is high.

    DOI: 10.1007/978-3-319-56687-0_3

    Scopus

    researchmap

  • Evaluation of face-to-face communication skills for people with dementia using a head-mounted system Reviewed

    Nakazawa, A, Okino Y, Honda M

    3rd International Workshop on Pattern Recognition for Healthcare Analytics   2016.12

     More details

    Language:English   Publishing type:Article, review, commentary, editorial, etc. (international conference proceedings)  

    researchmap

  • Point of Gaze Estimation Using Corneal Surface Reflection and 360° Spherical Image

    116 ( 208 )   75 - 81   2016.9

     More details

    Language:Japanese  

    CiNii Article

    researchmap

  • Evaluation of Care Skills using a Head-mounted Camera

    116 ( 39 )   95 - 100   2016.5

     More details

  • Heat map visualization of multi-slice medical images through correspondence matching of video frames Reviewed

    Divesh Lala, Atsushi Nakazawa

    2016 ACM SYMPOSIUM ON EYE TRACKING RESEARCH & APPLICATIONS (ETRA 2016)   119 - 122   2016

     More details

    Language:English   Publishing type:Research paper, summary (international conference)   Publisher:ASSOC COMPUTING MACHINERY  

    Visual inspection of medical imagery such as MRI and CT scans is a major task for medical professionals who must diagnose and treat patients without error. Given this goal, visualizing search behavior patterns used to recognize abnormalities in these images is of interest. In this paper we describe the development of a system which automatically generates multiple image-dependent heat maps from eye gaze data of users viewing medical image slices. This system only requires the use of a non-wearable eye gaze tracker and video capturing system. The main automated features are the identification of a medical image slice located inside a video frame and calculation of the correspondence between display screen and raw image eye gaze locations. We propose that the system can be used for eye gaze analysis and diagnostic training in the medical field.
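
    Once gaze samples have been mapped into image coordinates, heat-map generation itself reduces to splatting a Gaussian around each fixation. A generic sketch of that accumulation step (grid size and sigma are arbitrary choices, and the screen-to-image correspondence step the system performs is omitted):

```python
import math

def gaze_heatmap(gaze_points, width, height, sigma=2.0):
    # Accumulate a Gaussian splat around each (x, y) gaze sample.
    hm = [[0.0] * width for _ in range(height)]
    for gx, gy in gaze_points:
        for y in range(height):
            for x in range(width):
                d2 = (x - gx) ** 2 + (y - gy) ** 2
                hm[y][x] += math.exp(-d2 / (2.0 * sigma * sigma))
    return hm
```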

    DOI: 10.1145/2857491.2857504

    Web of Science

    researchmap

  • Noise stable image registration using Random Resample Consensus Reviewed

    Atsushi Nakazawa

    2016 23RD INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR)   853 - 858   2016

     More details

    Language:English   Publishing type:Article, review, commentary, editorial, etc. (international conference proceedings)   Publisher:IEEE COMPUTER SOC  

    Image registration is an important and fundamental problem in computer vision and image processing. Although there is currently a large number of image registration algorithms, such as RANSAC and its extensions, image registration under very noisy conditions remains difficult when enough correct corresponding points cannot be obtained. This paper solves this issue by introducing a random resample consensus (RANRESAC) strategy, which achieves robust registration where it is difficult to obtain enough correct correspondence pairs. In contrast to RANSAC, the proposed RANRESAC generates new corresponding points for the images using the hypothesis transformation function, and verifies their correctness by evaluating the similarity of the local features at the newly sampled points. To confirm the effectiveness of the proposed method, we first conducted a preliminary experiment that evaluates the similarity of the texture and orientation components of the SURF local descriptor in images with several levels of added noise. As a result, we observed that the texture component is more stable than the orientation component. Based on this finding, we designed the RANRESAC algorithm and performed experiments using an open image registration dataset. As a result, the proposed method outperforms the RANSAC, MSAC, and Optimal RANSAC algorithms under large-noise conditions.
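
    The core contrast with RANSAC is that a hypothesis is scored by resampling fresh points and checking local appearance under the hypothesised transform, rather than by counting inliers among the original matches. A toy translation-only version of that idea, where single pixel values stand in for the SURF texture descriptors used in the paper:

```python
import random

def ranresac_translation(img_a, img_b, matches, n_hyp=50, n_resample=20):
    # Each hypothesis is a pure integer translation taken from one
    # putative match. It is verified RANRESAC-style: sample random
    # fresh locations and count how often the local appearance (here
    # a single pixel value) agrees under the hypothesised transform.
    h, w = len(img_a), len(img_a[0])
    rng = random.Random(0)
    best, best_score = None, -1
    for _ in range(n_hyp):
        (ax, ay), (bx, by) = rng.choice(matches)
        dx, dy = bx - ax, by - ay
        score = 0
        for _ in range(n_resample):
            x, y = rng.randrange(w), rng.randrange(h)
            tx, ty = x + dx, y + dy
            if 0 <= tx < w and 0 <= ty < h and img_a[y][x] == img_b[ty][tx]:
                score += 1
        if score > best_score:
            best, best_score = (dx, dy), score
    return best
```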

    DOI: 10.1109/ICPR.2016.7899742

    Web of Science

    researchmap

  • The Technique of Corneal Reflection Analysis: Foundations and Applications

    中澤篤志

    電子情報通信学会技術研究報告   116 ( 265(NC2016 16-31) )   2016

  • Estimating the Memorability of Real Environments from First-person Videos

    Kento OIZUMI, Atsushi NAKAZAWA, Toyoaki Nishida

    Meeting on Image Recognition and Understanding   2015.7

     More details

    Language:Japanese   Publishing type:Meeting report  

    researchmap

  • A Mobile Corneal Imaging Camera for Estimation of Human's View

    NAKAZAWA Atsushi

    2015.7

     More details

    Language:Japanese   Publishing type:Article, review, commentary, editorial, etc. (international conference proceedings)  

    researchmap

  • Affective Computing - Provide Emotions to Computer Systems

    Atsushi Nakazawa

    会誌 自動車技術会   2015.3 ( 3 )   31 - 34   2015

     More details

  • A Mobile Corneal Imaging Camera for Estimation of Human's View

    Atsushi Nakazawa

    Meeting on Image Recognition and Understanding   2015

     More details

    Language:English  

    researchmap

  • Quantitative Relationship between Concentration and Pupil-diameter Change in Visual Manipulation Tasks

    Yuta Tange, Asami Matsumoto, Atsushi Nakazawa, Toyoaki Nishida

    Meeting on Image Recognition and Understanding   2015

     More details

    Language:Japanese  

    researchmap

  • NON-CALIBRATED AND REAL-TIME HUMAN VIEW ESTIMATION USING A MOBILE CORNEAL IMAGING CAMERA Reviewed

    Atsushi Nakazawa, Christian Nitschke, Toyoaki Nishida

    2015 IEEE International Conference on Multimedia & Expo Workshops (ICMEW)   1 - 6   2015

     More details

    Language:English   Publishing type:Research paper, summary (international conference)   Publisher:IEEE  

    We present a mobile human view estimation system using a corneal imaging technique. Compared to the current eye gaze tracking (EGT) systems, our system does not require per-session calibrations and a frontal view (scene) camera, making it suitable for wearable glass systems because it is easier to use and more socially acceptable due to the lack of a frontal scene camera. Our glasses system consists of a glass frame and a micro eye camera that captures the eye (corneal) reflections of a user. 3D corneal pose tracking is performed for the captured images by using a particle filter-based real-time tracking method leveraged by a 3D eye model and weak perspective projection. We then compute the gaze reflection point (GRP) where the light from the point of gaze (PoG) is reflected, enabling us to identify where a user is looking in a scene image reflected on the corneal surface. We conducted experiments using a standard computer display setup and several real-world scenes, and found that the proposed method performs with considerable accuracy under non-calibrated setups. This demonstrates its potential for various purposes such as the user interface of glasses systems and the analysis of human perceptions in actual scenes for marketing, environmental design, and quality-of-life applications.
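
    The 3D corneal pose is tracked with a particle filter. A minimal one-dimensional particle filter (predict with motion noise, weight by observation likelihood, resample) shows the mechanism; the scalar state is a stand-in for the 3D pose, and all parameters below are arbitrary assumptions:

```python
import math
import random

def track_1d(observations, n=200, motion_std=0.05, obs_std=0.1):
    # Predict / weight / resample loop of a basic particle filter.
    rng = random.Random(1)
    parts = [rng.uniform(-1.0, 1.0) for _ in range(n)]
    estimates = []
    for z in observations:
        # Predict: diffuse particles with Gaussian motion noise.
        parts = [p + rng.gauss(0.0, motion_std) for p in parts]
        # Weight: Gaussian likelihood of the observation.
        w = [math.exp(-((p - z) ** 2) / (2.0 * obs_std ** 2)) for p in parts]
        # Resample proportionally to the weights.
        parts = rng.choices(parts, weights=w, k=n)
        estimates.append(sum(parts) / n)
    return estimates
```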

    Web of Science

    researchmap

  • View Estimation by the Corneal Imaging Method and Its Future Prospects

    Atsushi Nakazawa

    学術の動向   2015.9   89 - 91   2015

     More details

    Language:Japanese  

    researchmap

  • NON-CALIBRATED AND REAL-TIME HUMAN VIEW ESTIMATION USING A MOBILE CORNEAL IMAGING CAMERA Invited

    Atsushi Nakazawa, Christian Nitschke, Toyoaki Nishida

    2015 IEEE International Conference on Multimedia & Expo Workshops (ICMEW)   1 - 4   2015

     More details

    Language:English   Publishing type:Research paper, summary (international conference)   Publisher:IEEE  

    We present a mobile human view estimation system using a corneal imaging technique. Compared to the current eye gaze tracking (EGT) systems, our system does not require per-session calibrations and a frontal view (scene) camera, making it suitable for wearable glass systems because it is easier to use and more socially acceptable due to the lack of a frontal scene camera. Our glasses system consists of a glass frame and a micro eye camera that captures the eye (corneal) reflections of a user. 3D corneal pose tracking is performed for the captured images by using a particle filter-based real-time tracking method leveraged by a 3D eye model and weak perspective projection. We then compute the gaze reflection point (GRP) where the light from the point of gaze (PoG) is reflected, enabling us to identify where a user is looking in a scene image reflected on the corneal surface. We conducted experiments using a standard computer display setup and several real-world scenes, and found that the proposed method performs with considerable accuracy under non-calibrated setups. This demonstrates its potential for various purposes such as the user interface of glasses systems and the analysis of human perceptions in actual scenes for marketing, environmental design, and quality-of-life applications.

    Web of Science

    researchmap

  • Synthetic Evidential Study as Primordial Soup of Conversation. Reviewed

    Nishida, T, Nakazawa, A, Ohmoto, Y, Nitschke, C, Mohammad, Y, Thovuttikul, S, Lala, D, Abe, M, Ookaki, T

    Databases in Networked Information Systems   74 - 83   2015

     More details

    Language:English  

    researchmap

  • Non-calibrated and real-time human view estimation using a mobile corneal imaging camera Invited

    NAKAZAWA Atsushi

    Japan-Korea Workshop on Information and Robot Technology for Daily Life Support   1 - 4   2014.9

     More details

    Language:English   Publishing type:Research paper, summary (international conference)  

    researchmap

  • Detection of Gaze Target Objects Using Active Markers

    Hiroaki Kato, Atsushi Nakazawa, Toyoaki Nishida

    IEICE Technical Report, MVE   2014

     More details

    Language:Japanese  

    researchmap

  • A Remote Point-of-Gaze Estimation System Based on the Corneal Imaging Method

    Yusuke OKINO, Kento OIZUMI, Hiroaki KATO, Atsushi NAKAZAWA, Toyoaki Nishida

    IEICE Technical Report, MVE   2014

     More details

    Language:Japanese  

    researchmap

  • The Corneal Imaging Technique - Its Foundations and Applications

    Atsushi Nakazawa

    JasFOS Symposium   2014

     More details

    Language:English  

    researchmap

  • Robust registration of eye reflection and scene images using random resample consensus

    Atsushi Nakazawa, Christian Nitschke, Toyoaki Nishida

    Meeting on Image Recognition and Understanding   2014

     More details

    Language:English  

    researchmap

  • Fundamental Theory and Applications of the Corneal Imaging Method: From Point-of-Gaze and Peripheral-vision Detection to High-resolution Scene Reconstruction

    中澤 篤志

    Japan Society of Precision Engineering, Technical Committee on Industrial Application of Image Processing   2014

     More details

    Language:Japanese  

    researchmap

  • Detecting the Point of Gaze Using Images on the Cornea

    Atsushi Nakazawa

    Nikkei Electronics   2014.8 ( 1140 )   59 - 68   2014

  • Conversational informatics: A data-intensive approach with emphasis on nonverbal communication

    Toyoaki Nishida, Atsushi Nakazawa, Yoshimasa Ohmoto, Yasser Mohammad

    Springer 4   9784431550402   1 - 344   2014

     More details

    Language:English   Publisher:TUT Press  

    This book covers an approach to conversational informatics which encompasses science and technology for understanding and augmenting conversation in the network age. A major challenge in engineering is to develop a technology for conveying not just messages but also underlying wisdom. Relevant theories and practices in cognitive linguistics and communication science, as well as techniques developed in computational linguistics and artificial intelligence, are discussed.

    DOI: 10.1007/978-4-431-55040-2_1

    Scopus

    researchmap

  • Corneal Imaging Revisited : An Overview of Corneal Reflection Analysis and Applications (IPSJ Transactions on Computer Vision and Applications Vol.5)

    2012 ( 2 )   1 - 18   2013.4

     More details

  • Current and Future Gaming Technology : from core technologies and business models to next generation interfaces

    67 ( 1 )   1 - 4   2013.1

     More details

  • Calibration-Free Point-of-Gaze Estimation Using the Corneal Imaging Method

    Atsushi Nakazawa, Christian Nitschke

    Proceedings of the 31st Annual Conference of the Robotics Society of Japan (CD-ROM)   2013

  • I see what you see: Point of Gaze Estimation from Corneal Images

    Christian Nitschke, Atsushi Nakazawa, Toyoaki Nishida

    2013 SECOND IAPR ASIAN CONFERENCE ON PATTERN RECOGNITION (ACPR 2013)   298 - 304   2013

     More details

    Language:English   Publisher:IEEE COMPUTER SOC  

    Eye-gaze tracking (EGT) is an important problem with a long history and various applications. However, state-of-the-art geometric vision-based techniques still suffer from major limitations, especially (1) the requirement for calibration of a static relationship between eye camera and scene, and (2) a parallax error that occurs when the depth of the scene varies. This paper introduces a novel concept for EGT that overcomes these limitations using corneal imaging. Based on the observation that the cornea reflects the surrounding scene over a wide field of view, it is shown how to extract that information and determine the point of gaze (PoG) directly in an eye image. To realize this, a closed-form solution is developed to obtain the gaze-reflection point (GRP), where light from the PoG reflects at the corneal surface into a camera. This includes compensation for the individual offset between optical and visual axis. Quantitative and qualitative evaluation shows that the strategy achieves considerable accuracy and successfully supports depth-varying environments. The novel approach provides important practical advantages, including reduced intrusiveness and complexity, and support for flexible dynamic setups, non-planar scenes and outdoor application.

    DOI: 10.1109/ACPR.2013.84

    Web of Science

    researchmap
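
    The gaze-reflection point (GRP) described in the abstract above is the point where light from the point of gaze reflects at the corneal surface into the camera. As a rough numeric analogue only (not the paper's closed-form solution, which also compensates for the optical/visual axis offset), the following sketch models the cornea as a plain mirror sphere and finds the specular point by brute-force search in the plane spanned by camera, sphere center, and scene point; all geometry values are illustrative.

    ```python
    import numpy as np

    def reflection_point(cam, center, radius, scene_pt, n_steps=20000):
        """Find the point on a mirror sphere where light from scene_pt
        reflects into a camera at `cam`, by scanning the circle where the
        sphere meets the plane of cam, center, and scene_pt."""
        # Orthonormal basis (u, v) of that plane, with u toward the camera.
        u = cam - center
        u = u / np.linalg.norm(u)
        w = scene_pt - center
        v = w - np.dot(w, u) * u
        v = v / np.linalg.norm(v)

        def angle_mismatch(theta):
            s = center + radius * (np.cos(theta) * u + np.sin(theta) * v)
            n = (s - center) / radius                     # surface normal
            to_cam = (cam - s) / np.linalg.norm(cam - s)
            to_src = (scene_pt - s) / np.linalg.norm(scene_pt - s)
            # Specular reflection: equal angles on both sides of the normal.
            return abs(np.dot(n, to_cam) - np.dot(n, to_src))

        # Scan the camera-facing hemisphere and keep the best sample.
        thetas = np.linspace(-np.pi / 2, np.pi / 2, n_steps)
        best = min(thetas, key=angle_mismatch)
        return center + radius * (np.cos(best) * u + np.sin(best) * v)
    ```

    The returned point lies on the sphere and satisfies the equal-angle reflection law to within the scan resolution; a closed-form treatment replaces the scan with the solution of a polynomial.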

  • Human Body-parts Tracking for Fine-grained Behavior Classification

    Norimichi Ukita, Atsushi Nakazawa

    2013 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS (ICCVW)   777 - 778   2013

     More details

    Language:English   Publisher:IEEE  

    This paper discusses the usefulness of human body-parts tracking for acquiring subtle cues in social interactions. While many kinds of body-parts tracking algorithms have been proposed, we focus on particle filtering-based tracking using prior models, which have several advantages for researches on social interactions. As a first step for extracting subtle cues from videos of social interaction behaviors, the advantages, disadvantages, and prospective properties of the body-parts tracking using prior models are summarized with actual results.

    DOI: 10.1109/ICCVW.2013.106

    Web of Science

    researchmap

  • Point of Gaze Estimation through Corneal Surface Reflection in an Active Illumination Environment

    Atsushi Nakazawa, Christian Nitschke

    Image sensing symposium   2013

     More details

    Language:Japanese   Publishing type:Research paper, summary (national, other academic conference)  

    DOI: 10.1007/978-3-642-33709-3_12

    researchmap

  • Corneal Reflection Analysis for Point of Gaze Estimation and Other Applications

    Atsushi Nakazawa, Christian Nitschke

    The 7th International Workshop on Robust Computer Vision   2013

     More details

    Language:English  

    researchmap

  • Arm Pose Copying for Humanoid Robots

    Yasser Mohammad, Toyoaki Nishida, Atsushi Nakazawa

    2013 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND BIOMIMETICS (ROBIO)   897 - 904   2013

     More details

    Language:English   Publisher:IEEE  

    Learning by imitation is becoming increasingly important for teaching humanoid robots new skills. The simplest form of imitation is behavior copying in which the robot is minimizing the difference between its perceived motion and that of the imitated agent. One problem that must be solved even in this simplest of all imitation tasks is calculating the learner's pose corresponding to the perceived pose of the agent it is imitating. This paper presents a general framework for solving this problem in closed form for the arms of a generalized humanoid robot of which most available humanoids are special cases. The paper also reports the evaluation of the proposed system for real and simulated robots.

    DOI: 10.1109/ROBIO.2013.6739576

    Web of Science

    researchmap

  • Virtual Dance Hall : Dance Interaction using Body Movement

    YASUNAGA Takuya, NAKAZAWA Atsushi, TAKEMURA Haruo

    Technical report of IEICE. Multimedia and virtual environment   112 ( 221 )   61 - 66   2012.9

     More details

    Language:Japanese   Publisher:The Institute of Electronics, Information and Communication Engineers  

    This paper presents a new approach for synthesizing dance performance matched to input music, based on the emotional aspects of dance performance. Our analysis method extracts motion rhythm and intensity from motion capture data, and musical rhythm and structure from musical signals. We extract candidate motion segment sets whose features match those of the music segments. To synthesize the dance performance, we select the motion segment set whose Kinect-measured intensity and connectivity match those of the music segments. As a result, our system adds an interactive component to live dancing performed by virtual characters.

    CiNii Article

    CiNii Books

    researchmap

  • Super resolution scene reconstruction using corneal reflections

    NITSCHKE Christian, NAKAZAWA Atsushi

    Technical report of IEICE. PRMU   112 ( 225 )   65 - 70   2012.9

     More details

    Language:Japanese   Publisher:The Institute of Electronics, Information and Communication Engineers  

    The corneal imaging technique enables extraction of scene information from corneal reflections and realizes a large number of applications including environment map reconstruction and estimation of a person's area of view. However, since corneal reflection images are usually low quality and resolution, the outcome of the technique is currently limited. To overcome this issue, we propose a first non-central catadioptric approach to reconstruct high-resolution scene information from a series of lower resolution corneal images through a super-resolution technique. We describe a three-step process, including (1) single image environment map recovery, (2) multiple image registration, and (3) high-resolution image reconstruction. In a number of experiments we show that the proposed strategy successfully recovers high-frequency textures that are lost in the source images, and also works with other non-central catadioptric systems, e.g., involving spherical mirrors.

    CiNii Article

    CiNii Books

    researchmap

  • Human-Computer Dance Interaction with Real-time Control using Wiimote and Kinect

    YASUNAGA Takuya, NAKAZAWA Atsushi, TAKEMURA Haruo

    Technical report of IEICE. PRMU   112 ( 225 )   23 - 28   2012.9

     More details

    Language:Japanese   Publisher:The Institute of Electronics, Information and Communication Engineers  

    This paper presents a new approach for synthesizing dance performance matched to input music, based on the emotional aspects of dance performance. Since the sensor readings express the intensities of users' body movements, the system can synthesize character motions whose intensities are synchronized with those of the users. Our analysis method extracts motion rhythm and intensity from motion capture data, and musical rhythm and structure from musical signals. From the results of this analysis, we build a motion graph that can generate character motions matched to the musical rhythm. To synthesize the dance performance, we select the motion segment set whose sensor-measured intensity and connectivity match those of the music segments. As a result, our system adds an interactive component to live dancing performed by virtual characters.

    CiNii Article

    CiNii Books

    researchmap

  • A Survey on the High-School Subject "Information" among First-Year Students at Osaka University

    Tomohiro Mashita, Kiyoshi Kiyokawa, Atsushi Nakazawa, Haruo Takemura

    Proceedings of the Symposium on Information Education 2012   2012 ( 4 )   29 - 34   2012.8

     More details

    Language:Japanese  

    CiNii Article

    researchmap

  • Educational Computer System in Osaka University

    2012 ( 8 )   1 - 5   2012.5

     More details

    Language:Japanese  

    CiNii Article

    researchmap

  • An Instrumented Puppet Interface for Retrieval of Motion Capture Data

    NUMAGUCHI Naoki, NAKAZAWA Atsushi, SHIRATORI Takaaki, HODGINS Jessica

    Technical report of IEICE. PRMU   111 ( 430 )   41 - 46   2012.2

     More details

    Language:Japanese   Publisher:The Institute of Electronics, Information and Communication Engineers  

    Intuitive and efficient retrieval of motion capture data is essential for effective use of motion capture databases. In this paper, we describe a system that allows the user to retrieve a particular sequence by performing an approximation of the motion with an instrumented puppet. This interface is intuitive because both adults and children have experience playacting with puppets. We develop a novel similarity metric called dual subspace projection method (DSPM) which works even for relatively large databases. We conducted a user study of the system and subjects could find the desired motion with reasonable accuracy from a database consisting of everyday, exercise, and acrobatic behaviors.

    CiNii Article

    CiNii Books

    researchmap

  • Dance to Music Character Animation with accelerometer based user control

    Yasunaga Takuya, Nakazawa Atsushi, Takemura Haruo

    IPSJ SIG Notes   2012 ( 25 )   1 - 6   2012.1

     More details

    Language:Japanese   Publisher:Information Processing Society of Japan (IPSJ)  

    This paper presents a new approach for synthesizing dance performance matched to input music, based on the emotional aspects of dance performance. Our analysis method extracts motion rhythm and intensity from motion capture data, and musical rhythm and structure from musical signals. We first extract candidate motion segment sets whose features match those of the music segments. To synthesize the dance performance, we select the motion segment set whose accelerometer-measured intensity and connectivity match those of the music segments. We evaluate the total synthesis time and the matching value.

    CiNii Article

    CiNii Books

    researchmap

  • Human-computer dance interaction with realtime accelerometer control Reviewed

    Takuya Yasunaga, Atsushi Nakazawa, Haruo Takemura

    MM 2012 - Proceedings of the 20th ACM International Conference on Multimedia   1157 - 1160   2012

     More details

    Language:English  

    Motion-capture-based character animations are widely used in computer graphics and interactive games. In this paper, we show a novel approach to creating dancing character animations that react to input music and an accelerometer manipulated by a user. Since the sensor readings express the intensities of users' body movements, the system can synthesize character motions whose intensities are synchronized with those of the users. Our system consists of an analysis phase and a synthesis phase. In the analysis phase, the musical beat and segments are detected from the input sound, and motion rhythm and intensities are extracted from motion capture data. With the results of this analysis, we generate a motion graph that can produce character motions matched to the musical rhythm. In the synthesis phase, the system receives the output of the accelerometer and traverses the motion graph according to the matching between the sensor data and the motion intensity. As a result, our system adds an interactive component to live dancing performed by virtual characters. © 2012 ACM.

    DOI: 10.1145/2393347.2396407

    Scopus

    researchmap
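
    The sensor-driven selection summarized in the abstract above (pick, among motion segments already compatible with the musical beat, the one whose intensity matches the user's live accelerometer intensity) can be sketched as follows. This is a minimal illustration, not the paper's system; the `intensity` annotation on candidates and the definition of intensity as the standard deviation of the acceleration magnitude are assumptions for the sketch.

    ```python
    import numpy as np

    def accel_intensity(samples):
        """Movement intensity from raw 3-axis accelerometer samples,
        taken here as the standard deviation of the acceleration
        magnitude (a common, simple proxy)."""
        mag = np.linalg.norm(samples, axis=1)
        return float(mag.std())

    def pick_segment(samples, candidates):
        """Among candidate motion segments (assumed pre-filtered to match
        the musical rhythm), pick the one whose annotated intensity is
        closest to the user's current movement intensity."""
        target = accel_intensity(samples)
        return min(candidates, key=lambda c: abs(c["intensity"] - target))
    ```

    A motion-graph traversal would call `pick_segment` at every transition point, so the character's energy tracks the user's while the beat alignment is preserved by the graph itself.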

  • Current Status and Prospects of the Information Education System at Osaka University

    Atsushi Nakazawa, Tomohiro Mashita, Kiyoshi Kiyokawa

    Proceedings of the Annual Conference of AXIES (Academic eXchange for Information Environment and Strategy)   5p   2012

     More details

    Language:Japanese   Publisher:AXIES  

    CiNii Article

    researchmap

  • Super-Resolution from Corneal Images

    Christian Nitschke, Atsushi Nakazawa

    PROCEEDINGS OF THE BRITISH MACHINE VISION CONFERENCE 2012   2012

     More details

    Language:English   Publisher:B M V A PRESS  

    The corneal imaging technique enables extraction of scene information from corneal reflections and realizes a large number of applications including environment map reconstruction and estimation of a person's area of view. However, since corneal reflection images are usually low quality and resolution, the outcome of the technique is currently limited. To overcome this issue, we propose a first non-central catadioptric approach to reconstruct high-resolution scene information from a series of lower resolution corneal images through a super-resolution technique. We describe a three-step process, including (1) single image environment map recovery, (2) multiple image registration, and (3) high-resolution image reconstruction. In a number of experiments we show that the proposed strategy successfully recovers high-frequency textures that are lost in the source images, and also works with other non-central catadioptric systems, e.g., involving spherical mirrors. The obtained information about a person and the environment enables novel applications, e.g., for surveillance systems, personal video, human-computer interaction, and upcoming head-mounted cameras (Google Glass [5]).

    DOI: 10.5244/C.26.22

    Web of Science

    researchmap
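
    The abstract above describes a three-step process ending in high-resolution image reconstruction from registered lower-resolution views. As a rough illustration of that last step only, here is a naive shift-and-add scheme on a planar image grid; the paper's non-central catadioptric formulation is far more involved, and the sub-pixel `shifts` here are assumed to be known from a prior registration step.

    ```python
    import numpy as np

    def shift_and_add_sr(frames, shifts, scale):
        """Naive shift-and-add super-resolution: place each low-res pixel
        onto a finer grid at its (sub-pixel) shifted position and average.
        frames: list of 2-D arrays; shifts: per-frame (dy, dx) in low-res
        pixels; scale: integer upsampling factor."""
        h, w = frames[0].shape
        acc = np.zeros((h * scale, w * scale))
        cnt = np.zeros_like(acc)
        ys, xs = np.mgrid[0:h, 0:w]
        for img, (dy, dx) in zip(frames, shifts):
            # Map each low-res sample to its nearest high-res grid cell.
            hy = np.clip(np.round((ys + dy) * scale).astype(int), 0, h * scale - 1)
            hx = np.clip(np.round((xs + dx) * scale).astype(int), 0, w * scale - 1)
            np.add.at(acc, (hy, hx), img)
            np.add.at(cnt, (hy, hx), 1)
        out = np.where(cnt > 0, acc / np.maximum(cnt, 1), 0.0)
        out[cnt == 0] = out[cnt > 0].mean()  # crude fill for unobserved cells
        return out
    ```

    Real super-resolution pipelines follow the averaging with a deconvolution or regularized inversion step; the averaging alone already recovers sub-pixel detail when the shifts cover the fine grid.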

  • Virtual Dance Hall : Dance Interaction using Body Movement

    14   61 - 66   2012

     More details

    Language:Japanese  

    CiNii Article

    researchmap

  • An Instrumented Puppet Interface for Retrieval of Motion Capture Data

    2011 ( 1 )   1 - 8   2011.11

     More details

  • Non-Wearable, Calibration-Free Point-of-Gaze Estimation Using Corneal Surface Reflections and High-Speed Active Light Projection

    Atsushi Nakazawa, Christian Nitschke, Alexander Radkov, Haruo Takemura

    Proceedings of the Meeting on Image Recognition and Understanding (MIRU2011)   2011   41 - 48   2011.7

     More details

    Language:Japanese  

    CiNii Article

    researchmap

  • Image-based Eye Pose and Reflection Analysis for Advanced Interaction Techniques and Scene Understanding

    Christian Nitschke, Atsushi Nakazawa, Haruo Takemura

    研究報告コンピュータビジョンとイメージメディア(CVIM)   2011 ( 31 )   1 - 16   2011.5

     More details

    Language:English  

    Recently, the geometric relation between a human eye and its image has been formalized to analyze environmental light reflections in the cornea. Proceeding with these efforts, our study proposes a theory of the light transport at the corneal surface including multiple eye poses, develops novel applications, and performs comprehensive experimental evaluation. Based on anthropometric data, a spherical-curvature geometric eye model is developed, and subsequently applied to discuss methods for eye pose estimation from projected circular eye features. The combination of camera and corneal mirror acts as a catadioptric imaging system, for which we describe the back projection to reconstruct the position of a light source, and the forward projection of light from a given source into the image. The theory has several practical applications in scene reconstruction and human-computer interaction, where we discuss the geometric calibration of display-camera setups as one particular problem. We propose a novel approach that eliminates the requirement of special hardware and tedious user interaction by analyzing screen reflections in the cornea. Based on this setting, thorough experimental evaluation shows that simple scene reconstruction results in a large error. We discuss possible reasons and introduce an optimization scheme that achieves feasible results by exploiting geometry constraints within the system. Our study provides sophisticated strategies for analyzing the geometric relation between camera, eye pose, corneal shape, and scene structure within arbitrary dynamic environments. The findings and developments enable novel insights, understanding, and applications in the analysis of human-scene interaction. We believe that this work has implications on several fields and is an important contribution.

    CiNii Article

    CiNii Books

    researchmap

  • Practical Display-Camera Calibration from Eye Reflections using Coded Illumination Reviewed

    Christian Nitschke, Atsushi Nakazawa, Haruo Takemura

    2011 FIRST ASIAN CONFERENCE ON PATTERN RECOGNITION (ACPR)   550 - 554   2011

     More details

    Language:English   Publisher:IEEE COMPUTER SOC  

    Display-camera systems enable a large range of vision applications in everyday environments, but require calibrating the pose of the display with respect to the camera. Recently, a promising approach is introduced that analyzes corneal reflections from face images. While having several benefits, the described implementation requires controlled and unnatural conditions, manual elaboration, and offline processing. This work aims to improve the approach and allow for conditions of practice: First, the novel idea is proposed to encode display correspondences into illumination patterns to enable automatic and robust detection. This comprises the first use of coded illumination with corneal reflection analysis. Second, since previous evaluation revealed a large error for the basic algorithm, this work discusses thorough experimental evaluation of an optimization strategy based on geometry constraints in the scene. Results show that considerable improvement can be achieved under common parameter variation, which successfully verifies the feasibility of the approach.

    DOI: 10.1109/ACPR.2011.6166661

    Web of Science

    researchmap

  • A puppet interface for retrieval of motion capture data Reviewed

    Naoki Numaguchi, Atsushi Nakazawa, Takaaki Shiratori, Jessica K. Hodgins

    Proceedings - SCA 2011: ACM SIGGRAPH / Eurographics Symposium on Computer Animation   157 - 166   2011

     More details

    Language:English  

    Intuitive and efficient retrieval of motion capture data is essential for effective use of motion capture databases. In this paper, we describe a system that allows the user to retrieve a particular sequence by performing an approximation of the motion with an instrumented puppet. This interface is intuitive because both adults and children have experience playacting with puppets and toys to express particular behaviors or to tell stories with style and emotion. The puppet has 17 degrees of freedom and can therefore represent a variety of motions. We develop a novel similarity metric between puppet and human motion by computing the reconstruction errors of the puppet motion in the latent space of the human motion and those of the human motion in the latent space of the puppet motion. This metric works even for relatively large databases. We conducted a user study of the system and subjects could find the desired motion with reasonable accuracy from a database consisting of everyday, exercise, and acrobatic behaviors. Copyright © 2011 by the Association for Computing Machinery, Inc.

    DOI: 10.1145/2019406.2019427

    Scopus

    researchmap
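
    The dual subspace projection idea mentioned in the abstract above compares puppet and human motions by the reconstruction error of each motion in the other's latent space. A minimal PCA-based sketch of that idea, assuming each motion is a matrix of per-frame feature rows and using `k` principal components (the paper's actual latent spaces and metric details may differ), might look like:

    ```python
    import numpy as np

    def pca_basis(X, k):
        """Mean and top-k principal directions of row-sample matrix X (n x d)."""
        mean = X.mean(axis=0)
        # Rows of Vt are principal directions of the centered data.
        _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
        return mean, Vt[:k]

    def recon_error(X, mean, basis):
        """Mean squared error of reconstructing X from a foreign subspace."""
        Xc = X - mean
        proj = Xc @ basis.T @ basis   # project onto the subspace and back
        return float(np.mean((Xc - proj) ** 2))

    def dual_subspace_distance(A, B, k=3):
        """Symmetric dissimilarity: A reconstructed in B's subspace plus
        B reconstructed in A's subspace (smaller = more similar)."""
        ma, Ba = pca_basis(A, k)
        mb, Bb = pca_basis(B, k)
        return recon_error(A, mb, Bb) + recon_error(B, ma, Ba)
    ```

    Because the comparison happens in latent spaces rather than raw coordinates, the two motions need not share a skeleton or dimensionality beyond the shared feature encoding, which is what makes the metric usable for puppet-to-human retrieval.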

  • Display-Camera Calibration from Eye Reflections

    NITSCHKE Christian, NAKAZAWA Atsushi, TAKEMURA Haruo

    The IEICE transactions on information and systems (Japanese edition)   93 ( 8 )   1450 - 1460   2010.8

     More details

    Language:Japanese   Publisher:The Institute of Electronics, Information and Communication Engineers  

    Research on display-camera systems treats a flat-panel display as a light source, captures images of a target under varying illumination patterns, and applies them to shape reconstruction and HCI. Such applications require geometric calibration between the display and the camera, and conventional methods need an additional device such as a spherical mirror. In contrast, this paper proposes a method that achieves calibration without any special device by exploiting surface reflections on the user's eyes. The display projects patterns whose reflections are captured by the camera, yielding correspondences between display coordinates and camera coordinates. A solution is then obtained by optimization over these correspondences, an eye model, and the display size. Experiments with 11 subjects and with displays and cameras arranged in various configurations demonstrate the effectiveness of the method and clarify its performance under diverse conditions.

    CiNii Article

    CiNii Books

    researchmap

  • MMM-classification of 3D Range Data

    AGRAWAL Anuraag, NAKAZAWA Atsushi, TAKEMURA Haruo

    IEICE technical report   109 ( 470 )   193 - 198   2010.3

     More details

    Language:Japanese   Publisher:The Institute of Electronics, Information and Communication Engineers  

    This paper presents a method for accurately segmenting and classifying 3D range data into particular object classes. Object classification of input images is necessary for applications including robot navigation and automation, in particular with respect to path planning. To achieve robust object classification, we propose the idea of an object feature which represents a distribution of neighboring points around a target point. In addition, rather than processing raw points, we reconstruct polygons from the point data, introducing connectivity to the points. With these ideas, we can refine the Markov Random Field (MRF) calculation with more relevant information with regards to determining "related points". The algorithm was tested against five outdoor scenes and provided accurate classification even in the presence of many classes of interest.

    CiNii Article

    CiNii Books

    researchmap

  • Display-Camera Calibration from Eye Reflections

    NITSCHKE Christian, NAKAZAWA Atsushi, TAKEMURA Haruo

    Technical report of IEICE. PRMU   109 ( 470 )   205 - 210   2010.3

     More details

    Language:Japanese   Publisher:The Institute of Electronics, Information and Communication Engineers  

    We present a novel technique for calibrating display-camera systems from reflections in the user's eyes. Display-camera systems enable a range of vision applications that need controlled illumination, including 3D object reconstruction, facial modeling and human computer interaction. One important issue, though, is the geometric calibration of the display, which requires additional hardware and tedious user interaction. The proposed approach eliminates this requirement by analyzing patterns that are reflected in the cornea, a mirroring device that naturally exists in any display-camera system. We introduce an optimization strategy that is able to refine eye and spherical mirror calibration results. When applied to the eye, it even outperforms spherical mirror calibration unoptimized. Furthermore, we obtain a robust estimation of eye poses which can be used for eye tracking applications. Despite the difficult working conditions, the calibration results are good and should be sufficient for many applications.

    CiNii Article

    CiNii Books

    researchmap

  • D-11-78 DISASTER RECOGNITION IN 3D RANGE DATA

    Kawai Kentaro, Agrawal Anuraag, Nakazawa Atsushi, Takemura Haruo

    Proceedings of the IEICE General Conference   2010 ( 2 )   78 - 78   2010.3

     More details

    Language:Japanese   Publisher:The Institute of Electronics, Information and Communication Engineers  

    CiNii Article

    CiNii Books

    researchmap

  • D-12-88 A Hand Cart Type 3D Reconstruction System using a Laser Range Finder and a Single Camera

    Ohnishi Takayuki, Nitschke Christian, Nakazawa Atsushi, Takemura Haruo

    Proceedings of the IEICE General Conference   2010 ( 2 )   199 - 199   2010.3

     More details

    Language:Japanese   Publisher:The Institute of Electronics, Information and Communication Engineers  

    CiNii Article

    CiNii Books

    researchmap

  • An Interactive 3D VR System by Concatenating 3D Videos: Realizing 3D Video with a 3D Video Graph

    Atsushi Nakazawa

    Image Laboratory (Gazo Labo), Image Laboratory Editorial Committee (ed.)   20 ( 9 )   12 - 17   2009.9

     More details

    Language:Japanese   Publisher:Japan Industrial Publishing  

    CiNii Article

    CiNii Books

    researchmap

  • Modeling 3D Urban Environment with Registration between Laser Range Images and Google Maps Image

    MATSUMURA Miki, AGRAWAL Anuraag, NAKAZAWA Atsushi, TAKEMURA Haruo

    IEICE technical report   109 ( 88 )   13 - 18   2009.6

     More details

    Language:Japanese   Publisher:The Institute of Electronics, Information and Communication Engineers  

    CiNii Article

    CiNii Books

    researchmap

  • Dance to Music Character Animation

    NINOMIYA KEI, NAKAZAWA ATSUSHI, TAKEMURA HARUO

    2009 ( 34 )   1 - 8   2009.6

  • Categorization of motion capture data using emotional words

    NUMAGUCHI NAOKI, NAKAZAWA ATSUSHI, TAKEMURA HARUO

    2009 ( 35 )   1 - 6   2009.6

  • Human video textures Reviewed

    Matthew Flagg, Atsushi Nakazawa, Qiushuang Zhang, Sing Bing Kang, Young Kee Ryu, Irfan Essa, James M. Rehg

    Proceedings of I3D 2009: The 2009 ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games   199 - 206   2009

     More details

    Language:English  

    This paper describes a data-driven approach for generating photorealistic animations of human motion. Each animation sequence follows a user-choreographed path and plays continuously by seamlessly transitioning between different segments of the captured data. To produce these animations, we capitalize on the complementary characteristics of motion capture data and video. We customize our capture system to record motion capture data that are synchronized with our video source. Candidate transition points in video clips are identified using a new similarity metric based on 3-D marker trajectories and their 2-D projections into video. Once the transitions have been identified, a video-based motion graph is constructed. We further exploit hybrid motion and video data to ensure that the transitions are seamless when generating animations. Motion capture marker projections serve as control points for segmentation of layers and nonrigid transformation of regions. This allows warping and blending to generate seamless in-between frames for animation. We show a series of choreographed animations of walks and martial arts scenes as validation of our approach. Copyright © 2009 by the Association for Computing Machinery, Inc.

    DOI: 10.1145/1507149.1507182

    Scopus

    researchmap

  • MMM-classification of 3D Range Data Reviewed

    Anuraag Agrawal, Atsushi Nakazawa, Haruo Takemura

    ICRA: 2009 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, VOLS 1-7   2269 - +   2009

     More details

    Language:English   Publisher:IEEE  

    This paper presents a method for accurately segmenting and classifying 3D range data into particular object classes. Object classification of input images is necessary for applications including robot navigation and automation, in particular with respect to path planning. To achieve robust object classification, we propose the idea of an object feature which represents a distribution of neighboring points around a target point. In addition, rather than processing raw points, we reconstruct polygons from the point data, introducing connectivity to the points. With these ideas, we can refine the Markov Random Field (MRF) calculation with more relevant information with regards to determining "related points". The algorithm was tested against five outdoor scenes and provided accurate classification even in the presence of many classes of interest.

    DOI: 10.1109/ROBOT.2009.5152539

    Web of Science

    researchmap
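
    The "object feature" described in the abstract above represents the distribution of neighboring points around a target point. A toy version of such a feature, assuming raw 3-D points and using only a histogram of vertical offsets within a fixed radius (the paper works on reconstructed polygons and feeds richer features into an MRF), could be:

    ```python
    import numpy as np

    def neighborhood_feature(points, idx, radius=1.0, n_bins=4):
        """Feature for one point: normalized histogram of vertical offsets
        of the neighbors inside `radius`, roughly separating flat ground
        from vertical structures. points: (n, 3) array."""
        p = points[idx]
        d = np.linalg.norm(points - p, axis=1)
        nbr = points[(d < radius) & (d > 0)]    # exclude the point itself
        if len(nbr) == 0:
            return np.zeros(n_bins)
        dz = nbr[:, 2] - p[2]
        hist, _ = np.histogram(dz, bins=n_bins, range=(-radius, radius))
        return hist / hist.sum()
    ```

    A flat ground patch concentrates all mass in the near-zero bin, while a wall or tree trunk spreads mass across the positive and negative bins, which is the kind of discriminative signal an MRF-based classifier can smooth over connected regions.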

  • Display-Camera Calibration from Eye Reflections Reviewed

    Christian Nitschke, Atsushi Nakazawa, Haruo Takemura

    2009 IEEE 12TH INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV)   1226 - 1233   2009

     More details

    Language:English   Publisher:IEEE  

    We present a novel technique for calibrating display-camera systems from reflections in the user's eyes. Display-camera systems enable a range of vision applications that need controlled illumination, including 3D object reconstruction, facial modeling and human computer interaction. One important issue, though, is the geometric calibration of the display, which requires additional hardware and tedious user interaction. The proposed approach eliminates this requirement by analyzing patterns that are reflected in the cornea, a mirroring device that naturally exists in any display-camera system. We introduce an optimization strategy that is able to refine eye and spherical mirror calibration results. When applied to the eye, it even outperforms spherical mirror calibration unoptimized. Furthermore, we obtain a robust estimation of eye poses which can be used for eye tracking applications. Despite the difficult working conditions, the calibration results are good and should be sufficient for many applications.

    DOI: 10.1109/ICCV.2009.5459330

    Web of Science

    researchmap

  • LARGE-SCALE 3D SCENE MODELING BY REGISTRATION OF LASER RANGE DATA WITH GOOGLE MAPS IMAGES Reviewed

    Anuraag Agrawal, Miki Matsumura, Atsushi Nakazawa, Haruo Takemura

    2009 16TH IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, VOLS 1-6   589 - 592   2009

     More details

    Language:English   Publisher:IEEE  

    This work presents a novel approach to registering multiple range images on top of a Google Maps image. The fundamental concept behind the method is matching completely different types of input with each other using classification as a middleman. Range images and Google Maps images are separated into classes, and the range image is also projected into a 2D top-down template image. The template image can then be matched against the Google Maps image to find its location and orientation on the map, which can be used for registering the range images. An experiment comparing this technique against using GPS to find position and orientation showed that it is effective at automatically constructing a reasonable large-scale 3D model whereas GPS would be completely ineffective.

    DOI: 10.1109/ICIP.2009.5413885

    Web of Science

    researchmap
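
    The matching step described above, aligning a top-down template projected from the range data with the map image, can be caricatured as exhaustive binary template matching over translations and 90-degree rotations. This sketch is not the paper's method (which matches class labels and estimates arbitrary orientation); it only illustrates the search structure, with made-up binary inputs.

    ```python
    import numpy as np

    def best_pose(template, map_img, angles=(0, 90, 180, 270)):
        """Exhaustively match a small binary top-down template against a
        binary map image over translations and 90-degree rotations;
        returns (score, angle, row, col) of the best placement."""
        best = (-1.0, None, None, None)
        H, W = map_img.shape
        for a in angles:
            t = np.rot90(template, a // 90)
            hh, ww = t.shape
            for r in range(H - hh + 1):
                for c in range(W - ww + 1):
                    patch = map_img[r:r + hh, c:c + ww]
                    score = float((patch == t).mean())  # fraction of agreeing cells
                    if score > best[0]:
                        best = (score, a, r, c)
        return best
    ```

    Replacing the binary agreement score with per-class agreement over the classified regions turns this brute-force search into the kind of registration the abstract describes, where GPS alone would not resolve orientation.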

  • EYE REFLECTION ANALYSIS AND APPLICATION TO DISPLAY-CAMERA CALIBRATION Reviewed

    Christian Nitschke, Atsushi Nakazawa, Haruo Takemura

    2009 16TH IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, VOLS 1-6   3449 - 3452   2009

     More details

    Language:English   Publisher:IEEE  

    We present a novel technique for calibrating display-camera systems from reflections in the user's eyes. Display-camera systems enable a range of vision applications that need controlled illumination, including 3D object reconstruction, facial modeling and human computer interaction. One important issue, though, is the geometric calibration of the display, which requires additional hardware and tedious user interaction. The proposed approach eliminates this requirement by analyzing patterns that are reflected in the cornea, a mirroring device that naturally exists in any display-camera system. By applying this strategy we also obtain a continuous estimation of eye poses which facilitates further applications. We investigate the effect of display size, camera-eye distance and individual eye anatomy experimentally using only off-the-shelf components. Results are promising and show the general feasibility of the approach.

    DOI: 10.1109/ICIP.2009.5413852

    Web of Science

    researchmap

  • Classification of Dance Motion Data Using Impression Words

    沼口直紀, 中澤篤志, 竹村治雄

    Proceedings of the Annual Conference of the Institute of Image Electronics Engineers of Japan (CD-ROM)   37th   2009

  • Interactive 3D Video Using the Sequencing of Multiple Scenes

    HATTORI Yuichi, NAKAZAWA Atsushi, TAKEMURA Haruo

    The IEICE transactions on information and systems   91 ( 12 )   2800 - 2808   2008.12

     More details

    Language:Japanese   Publisher:The Institute of Electronics, Information and Communication Engineers  

    CiNii Article

    CiNii Books

    researchmap

  • Modeling 3D Scenes with Registration between Laser Range Data and Google Maps Image

    MATSUMURA Miki, AGRAWAL Anuraag, NAKAZAWA Atsushi, TAKEMURA Haruo

    IEICE technical report   108 ( 328 )   169 - 176   2008.11

     More details

    Language:Japanese   Publisher:The Institute of Electronics, Information and Communication Engineers  

    In this research, we propose a method for constructing expansive 3D models of environments using highly accurate laser range data. Laser range data are captured with distance sensors called laser range sensors, which have grown in use in recent years, including practical applications such as Google Street View. However, modeling a large-scale environment like a city requires capturing data from various locations around the environment, and these data must be aligned with one another. In our method, we first choose a location to model, capture range and map data for it, and classify the data into multiple regions. We then find the location and orientation on the map at which the two results match, which gives the location and orientation of the captured range data.

    CiNii Article

    CiNii Books

    researchmap

  • Modeling 3D Scenes with Registration between Laser Range Data and Google Maps Image

    MATSUMURA Miki, AGRAWAL Anuraag, NAKAZAWA Atsushi, TAKEMURA Haruo

    IPSJ SIG Notes. CVIM   2008 ( 115 )   169 - 176   2008.11

     More details

    Language:Japanese   Publisher:Information Processing Society of Japan (IPSJ)  

    In this research, we propose a method for constructing expansive 3D models of environments using highly accurate laser range data. Laser range data are captured with distance sensors called laser range sensors, which have grown in use in recent years, including practical applications such as Google Street View. However, modeling a large-scale environment like a city requires capturing data from various locations around the environment, and these data must be aligned with one another. In our method, we first choose a location to model, capture range and map data for it, and classify the data into multiple regions. We then find the location and orientation on the map at which the two results match, which gives the location and orientation of the captured range data.

    CiNii Article

    CiNii Books

    researchmap

    Other Link: http://id.nii.ac.jp/1001/00051745/

  • Report on CVPR2008

    KAWASAKI Hiroshi, SHIMIZU Masao, TAKAMATSU Jun, TANAKA Masayuki, NAKAZAWA Atsushi, NOBUHARA Shohei, FURUKAWA Ryo, LAO Shihong, YAGI Yasushi

    IEICE technical report   108 ( 328 )   145 - 152   2008.11

     More details

    Language:Japanese   Publisher:The Institute of Electronics, Information and Communication Engineers  

    The IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR2008) was held in Anchorage, USA, June 22-28, 2008. This is a report on CVPR2008 by nine participants.

    CiNii Article

    CiNii Books

    researchmap

  • Report on CVPR2008

    KAWASAKI Hiroshi, SHIMIZU Masao, TAKAMATSU Jun, TANAKA Masayuki, NAKAZAWA Atsushi, NOBUHARA Shohei, FURUKAWA Ryo, LAO Shihong, YAGI Yasushi

    IPSJ SIG Notes. CVIM   2008 ( 115 )   145 - 152   2008.11

     More details

    Language:Japanese   Publisher:Information Processing Society of Japan (IPSJ)  

    The IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR2008) was held in Anchorage, USA, June 22-28, 2008. This is a report on CVPR2008 by nine participants.

    CiNii Article

    CiNii Books

    researchmap

  • Display-Camera Calibration from Eye Reflections

    NITSCHKE Christian, NAKAZAWA Atsushi, TAKEMURA Haruo

    Technical report of IEICE. PRMU   108 ( 198 )   113 - 120   2008.8

     More details

    Language:English   Publisher:The Institute of Electronics, Information and Communication Engineers  

    We present a technique for calibrating display-camera systems from corneal reflections in the user's eyes. Display-camera systems enable a range of vision applications that need controlled illumination including 3D object reconstruction, facial modeling or human computer interaction in everyday environments. An important issue is calibrating the pose of the display with respect to the camera. Such a calibration may be achieved using a planar mirror with attached calibration pattern or a spherical mirror of known size. However, all approaches require additional hardware and user interaction. We propose an automatic way to recover display properties from patterns that are reflected in the cornea, a mirroring device that naturally coexists in any display-camera system. By applying this strategy we also obtain a continuous estimation of eye pose which may be used to calibrate eye tracking systems and generally enhance human-computer interaction.

    CiNii Article

    CiNii Books

    researchmap

  • Human pose estimation using volumetric features and boosting approach

    TAIRA Ryosuke, NAKAZAWA Atsushi, TAKEMURA Haruo

    IPSJ SIG Notes. CVIM   2008 ( 82 )   143 - 148   2008.8

     More details

    Language:Japanese   Publisher:Information Processing Society of Japan (IPSJ)  

    Recently, considerable research has been conducted on markerless human-body motion capture using volume data. Most studies have used articulated body models that consist of primitives such as cylinders or ellipsoids. However, methods that use such models require very good initial parameters. This paper proposes a method to find the human posture parameters from a single frame without any knowledge of previous frames. We first prepare sample volume data representing various postures and cluster them according to the direction of body links. A volumetric feature vector is acquired for each sample, and the vectors are learned via the AdaBoost algorithm. The resulting classifier is applied to input volume data, and the matched class can be used as an initial parameter for tracking-based methods.

    CiNii Article

    CiNii Books

    researchmap

    Other Link: http://id.nii.ac.jp/1001/00051779/
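
    The AdaBoost step described in the abstract can be illustrated with a from-scratch boosted decision-stump classifier. The data, feature vectors, and function names below are hypothetical toy stand-ins for the paper's volumetric features; the sketch only shows the boosting mechanics.

```python
# Minimal AdaBoost with decision-stump weak learners.
import math

def train_adaboost(X, y, rounds=10):
    """X: list of feature vectors, y: labels in {-1,+1}.
    Returns a list of stumps (feature_index, threshold, polarity, alpha)."""
    n = len(X)
    w = [1.0 / n] * n                      # per-sample weights
    stumps = []
    for _ in range(rounds):
        best = None                        # (err, feat, thr, pol)
        for f in range(len(X[0])):
            for thr in sorted({x[f] for x in X}):
                for pol in (1, -1):
                    err = sum(
                        w[i]
                        for i in range(n)
                        if (pol if X[i][f] >= thr else -pol) != y[i]
                    )
                    if best is None or err < best[0]:
                        best = (err, f, thr, pol)
        err, f, thr, pol = best
        err = max(err, 1e-10)              # avoid log(0) on perfect stumps
        alpha = 0.5 * math.log((1 - err) / err)
        stumps.append((f, thr, pol, alpha))
        # Reweight: misclassified samples gain weight for the next round.
        for i in range(n):
            pred = pol if X[i][f] >= thr else -pol
            w[i] *= math.exp(-alpha * y[i] * pred)
        s = sum(w)
        w = [wi / s for wi in w]
    return stumps

def predict(stumps, x):
    score = sum(
        alpha * (pol if x[f] >= thr else -pol)
        for f, thr, pol, alpha in stumps
    )
    return 1 if score >= 0 else -1

if __name__ == "__main__":
    # Toy separable data: feature 0 determines the class.
    X = [[0.1, 5], [0.2, 3], [0.9, 4], [0.8, 1]]
    y = [-1, -1, 1, 1]
    model = train_adaboost(X, y, rounds=5)
    print([predict(model, x) for x in X])  # → [-1, -1, 1, 1]
```

In the paper's setting, each training vector would be a volumetric feature of a posture sample and each class a cluster of body-link directions; the trained classifier then supplies the initial posture for a tracker.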

  • Display-Camera Calibration from Eye Reflections

    NITSCHKE Christian, NAKAZAWA Atsushi, TAKEMURA Haruo

    IPSJ SIG Notes. CVIM   164 ( 82 )   115 - 122   2008.8

     More details

    Language:English   Publisher:Information Processing Society of Japan (IPSJ)  

    We present a technique for calibrating display-camera systems from corneal reflections in the user's eyes. Display-camera systems enable a range of vision applications that need controlled illumination including 3D object reconstruction, facial modeling or human computer interaction in everyday environments. An important issue is calibrating the pose of the display with respect to the camera. Such a calibration may be achieved using a planar mirror with attached calibration pattern or a spherical mirror of known size. However, all approaches require additional hardware and user interaction. We propose an automatic way to recover display properties from patterns that are reflected in the cornea, a mirroring device that naturally coexists in any display-camera system. By applying this strategy we also obtain a continuous estimation of eye pose which may be used to calibrate eye tracking systems and generally enhance human-computer interaction.

    CiNii Article

    CiNii Books

    researchmap

  • Human pose estimation using volumetric features and boosting approach

    TAIRA Ryosuke, NAKAZAWA Atsushi, TAKEMURA Haruo

    IEICE technical report   108 ( 198 )   141 - 146   2008.8

     More details

    Language:Japanese   Publisher:The Institute of Electronics, Information and Communication Engineers  

    Recently, considerable research has been conducted on markerless human-body motion capture using volume data. Most studies have used articulated body models that consist of primitives such as cylinders or ellipsoids. However, methods that use such models require very good initial parameters. This paper proposes a method to find the human posture parameters from a single frame without any knowledge of previous frames. We first prepare sample volume data representing various postures and cluster them according to the direction of body links. A volumetric feature vector is acquired for each sample, and the vectors are learned via the AdaBoost algorithm. The resulting classifier is applied to input volume data, and the matched class can be used as an initial parameter for tracking-based methods.

    CiNii Article

    CiNii Books

    researchmap

  • Example Based Approach for Human Pose Estimation Using Volume Data and Graph Matching

    TANAKA Hidenori, NAKAZAWA Atsushi, TAKEMURA Haruo

    The IEICE transactions on information and systems   91 ( 6 )   1580 - 1591   2008.6

     More details

    Language:Japanese   Publisher:The Institute of Electronics, Information and Communication Engineers  

    CiNii Article

    CiNii Books

    researchmap

  • Using the Sequencing of 3D-Video to Create a Movie with an Arbitrary Viewpoint that can be Played Indefinitely

    HATTORI YUICHI, NAKAZAWA ATSUSHI, TAKEMURA HARUO

    IPSJ SIG Notes. CVIM   2008 ( 3 )   41 - 48   2008.1

     More details

    Language:Japanese   Publisher:Information Processing Society of Japan (IPSJ)  

    3D video with an arbitrary viewpoint, generated by 3D reconstruction of images captured by multiple cameras, is actively researched as a new type of media. In this research, we propose a method to connect different 3D video clips in order to realize interactive 3D video. Converting time-sequence 3D mesh shapes into skeleton data enables us to find frames that can connect two clips, determine a correspondence between the two mesh shapes, and generate an interpolated path of vertices. Because our method targets multi-joint models such as the human body, we can create a loop of human motion and naturally switch to another action at any time. This enables interactive media in which, for example, an object changes when the viewer "touches" it in an immersive virtual space.

    CiNii Article

    CiNii Books

    researchmap

    Other Link: http://id.nii.ac.jp/1001/00051926/

  • Using the Sequencing of 3D-Video to Create a Movie with an Arbitrary Viewpoint that can be Played Indefinitely

    HATTORI YUICHI, NAKAZAWA ATSUSHI, TAKEMURA HARUO

    IEICE technical report   107 ( 427 )   41 - 48   2008.1

     More details

    Language:Japanese   Publisher:The Institute of Electronics, Information and Communication Engineers  

    3D video with an arbitrary viewpoint, generated by 3D reconstruction of images captured by multiple cameras, is actively researched as a new type of media. In this research, we propose a method to connect different 3D video clips in order to realize interactive 3D video. Converting time-sequence 3D mesh shapes into skeleton data enables us to find frames that can connect two clips, determine a correspondence between the two mesh shapes, and generate an interpolated path of vertices. Because our method targets multi-joint models such as the human body, we can create a loop of human motion and naturally switch to another action at any time. This enables interactive media in which, for example, an object changes when the viewer "touches" it in an immersive virtual space.

    CiNii Article

    CiNii Books

    researchmap

  • Optimized rendering for a three-dimensional videoconferencing system Reviewed

    Rachel Chu, Susumu Date, Seiki Kuwabara, Atsushi Nakazawa, Haruo Takemura, Daniel Tenedorio, Jürgen P. Schulze, Fang-Pang Lin

    Proceedings - 4th IEEE International Conference on eScience, eScience 2008   540 - 546   2008

     More details

    Language:English  

    Industry widely employs the two-dimensional videoconferencing system as a long distance communication tool, but current limitations such as its tendency to misrepresent eye contact prevent it from becoming more widely adopted. We are exploring the possibility of a three-dimensional videoconferencing system for future interactive streaming of point cloud data, and present the preliminary research results in this paper. We have tested thus far with one sender and one receiver, using pre-recorded data for the sender. The sender, encircled by high-definition cameras, stands and speaks in a room. A cluster of computers reconstructs each frame of the camera images into a 3D point cloud and streams it across a high-speed, low-latency network. On the receiving end, a splat-based renderer employs a new algorithm to efficiently resample the points in real-time, maintaining a user-specified frame rate. Parallel hardware projects onto multiple screens while head tracking equipment records the viewer's movements, allowing the receiver to view a stereoscopic 3D representation of the sender from multiple angles. We can combine these visuals with appropriate use of multiple audio channels to forge an unparalleled virtual experience. This next step towards immersive 3D videoconferencing brings us closer to empowering worldwide collaboration between research departments. © 2008 IEEE.

    DOI: 10.1109/eScience.2008.42

    Scopus

    researchmap

  • Next-generation Teaching and Learning Platform for Higher Educational Institutions(<Special Issue> Application and Aggregation of Learning Objects and Learning Data)

    KAJITA Shoji, KAKUSHO Koh, NAKAZAWA Atsushi, TAKEMURA Haruo, MINOH Michihiko, MASE Kenji

    Japan journal of educational technology   31 ( 3 )   297 - 305   2007.12

     More details

    Language:Japanese   Publisher:Japan Society for Educational Technology  

    This paper describes Course Management Systems (CMSs) from the viewpoint of Open Source and Open Standard in terms of Software Architecture. CMS is a key information infrastructure for supporting teaching and learning in higher educational institutions. The effective use of CMS requires the integration with Student Information Systems (SIS), e-Library and other information infrastructures, along with strategic planning and sustainable development. In this paper, we also introduce our latest activities to develop next-generation CMS since 2004.

    DOI: 10.15077/jjet.KJ00004964294

    CiNii Article

    CiNii Books

    researchmap

  • Toward a Next-Generation Teaching and Learning Support Platform for Higher Educational Institutions Reviewed

    梶田 将司, 角所 考, 中澤 篤志, 竹村 治雄, 美濃 導彦, 間瀬 健二

    Japan Journal of Educational Technology   31 ( 3 )   36-45   2007.12

     More details

    Language:Japanese   Publishing type:Rapid communication, short report, research note, etc. (scientific journal)  

    researchmap

  • Real-Time Space Carving Using Graphics Hardware(<Special Section>Image Recognition and Understanding)

    NITSCHKE Christian, NAKAZAWA Atsushi, TAKEMURA Haruo

    IEICE transactions on information and systems   90 ( 8 )   1175 - 1184   2007.8

     More details

    Language:English   Publisher:The Institute of Electronics, Information and Communication Engineers  

    Reconstruction of real-world scenes from a set of multiple images is a topic in computer vision and 3D computer graphics with many interesting applications. Attempts have been made at real-time reconstruction on PC cluster systems; while these provide enough performance, they are expensive and less flexible. Approaches that use GPU hardware acceleration on single workstations achieve real-time frame rates for novel-view synthesis, but do not provide an explicit volumetric representation. This work shows our efforts in developing a GPU hardware-accelerated framework for providing a photo-consistent reconstruction of a dynamic 3D scene. High performance is achieved by employing a shape-from-silhouette technique in advance. Since the entire processing is done on a single PC, the framework can be applied in mobile environments, enabling a wide range of further applications. We explain our approach using programmable vertex and fragment processors and compare it to highly optimized CPU implementations. We show that the new approach can outperform the latter by more than one order of magnitude and give an outlook on interesting future enhancements.

    DOI: 10.1093/ietisy/e90-d.8.1175

    CiNii Article

    researchmap

  • Automatic Synthesis of Dance Performance Using Motion and Musical Features

    SHIRATORI Takaaki, NAKAZAWA Atsushi, IKEUCHI Katsushi

    The IEICE transactions on information and systems   90 ( 8 )   2242 - 2252   2007.8

     More details

    Language:Japanese   Publisher:The Institute of Electronics, Information and Communication Engineers  

    CiNii Article

    CiNii Books

    researchmap

  • Refinement of the Shape Reconstructed by Visual Cone Intersection using Fitting the Standard Human Model

    HATTORI YUICHI, NAKAZAWA ATSUSHI, TAKEMURA HARUO

    IPSJ SIG Notes. CVIM   2007 ( 31 )   147 - 154   2007.3

     More details

    Language:Japanese   Publisher:Information Processing Society of Japan (IPSJ)  

    3D human model reconstruction using visual cone intersection between images captured by multiple cameras has been thoroughly researched, but the accuracy of shape estimation by previous methods is limited by the use of silhouette images. Increasing the number of cameras can improve accuracy, but unsmooth surfaces in the shape data and the loss of surface normals remain problems. In this research, we refine the reconstruction by reshaping the reconstructed model. First, we estimate the subject's pose from captured human shape data and create a parts-fitted model using standard human model parts prepared in advance. We then reshape the parts-fitted model based on the surface normal vectors. In this way, we obtain an accurate, refined 3D human model that maintains the subject's pose and body characteristics.

    CiNii Article

    CiNii Books

    researchmap

    Other Link: http://id.nii.ac.jp/1001/00052069/

  • Multilinear analysis for task recognition and person identification Reviewed

    Manoj Perera, Takaaki Shiratori, Shunsuke Kudoh, Atsushi Nakazawa, Katsushi Ikeuchi

    2007 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-9   1415 - +   2007

     More details

    Language:English   Publisher:IEEE  

    This paper introduces a Multi-Factor Tensor (MFT) model to recognize motion styles and person identities in dance sequences. We apply a musical information analysis method to segment the motion sequence according to key poses and the musical rhythm. We define a task model over the repeated motion segments, in which motion is decomposed into a person-invariant factor, task, and a person-dependent factor, style. We capture the motion data of different people for a few cycles, segment it using the musical analysis approach, normalize the segments using a vectorization method, and build our MFT model. Experiments are conducted following two approaches to evaluate the recognition ability of the proposed model, and the results demonstrate its high accuracy. The recognition results and the motion decomposition will be used to extend the motion generation process to various styles and different tasks.

    DOI: 10.1109/IROS.2007.4399293

    Web of Science

    researchmap

  • Example-Based Human Pose Estimation Using Volume Data Thinning and Graph Matching

    田中 秀典, 中澤 篤志, 竹村 治雄

    Workshop Proceedings   229   57 - 64   2006.11

     More details

    Language:Japanese   Publisher:Institute of Image Electronics Engineers of Japan  

    CiNii Article

    CiNii Books

    researchmap

  • Compression of Time Sequence Volume Data using 3D Hu Invariant Moments

    HATTORI Yuichi, NAKAZAWA Atsushi, MACHIDA Takashi, TAKEMURA Haruo

    IPSJ SIG Notes. CVIM   2006 ( 51 )   145 - 150   2006.5

     More details

    Language:Japanese   Publisher:Information Processing Society of Japan (IPSJ)  

    Time-sequence 3D data has a broad range of applications and is a research area expected to develop further. Because time-sequence 3D data is extremely large, data compression is essential for transmission and storage. This paper proposes a compression method that uses time-sequence block matching with extended 3D Hu invariant moments to reduce temporal redundancy in the data. The method improves the compression ratio by detecting rotational motion of an object, which existing methods do not consider.

    CiNii Article

    CiNii Books

    researchmap

    Other Link: http://id.nii.ac.jp/1001/00052187/
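
    As background to the moment-based block matching above: the building block of Hu-style invariants is the central moment, which is unchanged when the object translates. The sketch below demonstrates only this translation-invariance property on hypothetical voxel coordinates; the paper's extended 3D Hu moments additionally achieve rotation invariance.

```python
# Central moments mu_pqr of a voxel set are translation invariant:
# shifting every voxel by the same offset leaves them unchanged.

def central_moment(voxels, p, q, r):
    """mu_pqr of a set of occupied voxel coordinates."""
    n = len(voxels)
    cx = sum(v[0] for v in voxels) / n     # centroid
    cy = sum(v[1] for v in voxels) / n
    cz = sum(v[2] for v in voxels) / n
    return sum(
        (x - cx) ** p * (y - cy) ** q * (z - cz) ** r
        for x, y, z in voxels
    )

if __name__ == "__main__":
    shape = [(0, 0, 0), (1, 0, 0), (0, 2, 0), (0, 0, 3)]
    moved = [(x + 5, y - 2, z + 7) for x, y, z in shape]
    m1 = central_moment(shape, 2, 0, 0)
    m2 = central_moment(moved, 2, 0, 0)
    print(m1 == m2)  # → True: mu_200 survives the translation
```

Matching blocks by such descriptors rather than raw voxels is what lets a coder recognize a block that has moved (and, with full 3D Hu invariants, rotated) between frames.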

  • Example Based Approach for Human Pose Estimation using Volume Data and Graph Matching

    TANAKA Hidenori, NAKAZAWA Atsushi, MACHIDA Takashi, TAKEMURA Haruo

    IPSJ SIG Notes. CVIM   2006 ( 51 )   131 - 136   2006.5

     More details

    Language:Japanese   Publisher:Information Processing Society of Japan (IPSJ)  

    In this paper, we propose a novel marker-free motion capture method using volume data. Volume data are reconstructed from multiple camera views at each frame through a visual-hull-based method. A thinning process is then performed to identify the structure of the volume. This yields a model skeleton graph in which the body and limbs are expressed as nodes, and links express the connectivity between them. We compare the acquired graph with the graphs in the Model Graph Database (MGDB) and find the most similar one. The MGDB contains example graphs that express human body postures. Because the nodes of the MGDB graphs are labeled according to body parts, we can identify the body parts of the input graph (skeleton) from the graph matching result. Finally, we fit the skeleton and the body-part models using the identification results. The experimental results show the validity of our approach.

    CiNii Article

    CiNii Books

    researchmap

    Other Link: http://id.nii.ac.jp/1001/00052185/

  • D-12-69 Volume Based Motion Capture Considering Topological Difference

    Tanaka Hidenori, Nakazawa Atsushi, Machida Takashi, Takemura Haruo

    Proceedings of the IEICE General Conference   2006 ( 2 )   201 - 201   2006.3

     More details

    Language:Japanese   Publisher:The Institute of Electronics, Information and Communication Engineers  

    CiNii Article

    CiNii Books

    researchmap

  • D-12-46 A Compression Method of Time Sequence Volume Data using 3D Hu Invariant Moments

    Hattori Yuichi, Nakazawa Atsushi, Machida Takashi, Takemura Haruo

    Proceedings of the IEICE General Conference   2006 ( 2 )   178 - 178   2006.3

     More details

    Language:Japanese   Publisher:The Institute of Electronics, Information and Communication Engineers  

    CiNii Article

    CiNii Books

    researchmap

  • Digitizing Human Motion and Its Applications: Analysis and Synthesis of Dance Motion Using Motion Capture and Musical Information

    白鳥 貴亮, 池内 克史, 中澤 篤志

    Gazo Labo (Image Laboratory)   17 ( 3 )   1 - 5   2006.3

     More details

    Language:Japanese   Publisher:Japan Industrial Publishing  

    CiNii Article

    CiNii Books

    researchmap

  • Task recognition and style analysis in dance sequences Reviewed

    Manoj Perera, Takaaki Shiratori, Shunsuke Kudoh, Atsushi Nakazawa, Katsushi Ikeuchi

    IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems   329 - 334   2006

     More details

    Language:English  

    In this paper we present a novel approach to recognizing motion styles and identifying persons using the Multi Factor Tensor (MFT) model. We apply a musical information analysis method in segmenting the motion sequence relevant to the key poses and the musical rhythm. We define a task model considering the repetitive motion segment, in which motion is decomposed into task and style. Given the motion data set, we formulate the MFT model and factorize it efficiently in recognizing the tasks, the styles, and the identities of persons. In our experiments, traditional dance by several people is chosen as the motion sequence. We capture the motion data for a few cycles, segment it using the musical analysis approach, normalize the segments, and recognize a task model from them. Various experiments to evaluate the potential of the recognition ability of our proposed approach are performed, and the results demonstrate the high accuracy of our model. © 2006 IEEE.

    DOI: 10.1109/MFI.2006.265645

    Scopus

    researchmap

  • Reproducing Dance Motions with a Biped Humanoid Robot Based on the Learning-from-Observation Paradigm (Special Issue: Evolution of Body and Motion)

    池内 克史, 中澤 篤志, 工藤 俊亮

    Biomechanics Research   10 ( 3 )   190 - 202   2006

     More details

    Language:Japanese   Publisher:Japanese Society of Biomechanics  

    CiNii Article

    CiNii Books

    researchmap

  • Synthesizing dance performance using musical and motion features Reviewed

    Takaaki Shiratori, Atsushi Nakazawa, Katsushi Ikeuchi

    2006 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), VOLS 1-10   2006   3654 - +   2006

     More details

    Language:English   Publisher:IEEE  

    This paper proposes a method for synthesizing dance performances synchronized to the music being played, presenting a system that imitates dancers' skill of performing motion while listening to music. Our method consists of motion analysis, music analysis, and motion synthesis based on the results of the analyses. In the analysis steps, motion and music features are acquired, derived from motion keyframes, motion intensity, music intensity, musical beats, and chord changes. Our system also constructs a motion graph to search for similar poses in the given dance sequences and to connect them as possible transitions. In the synthesis step, the trajectory that provides the best correlation between music and motion features is selected from the motion graph, and the resulting motion is generated. Our experimental results indicate that the proposed method creates dance as the system "hears" the music.

    DOI: 10.1109/ROBOT.2006.1642260

    Web of Science

    researchmap

  • Parallel Simultaneous Alignment of a Large Number of Range Images on Distributed Memory System

    OISHI TAKESHI, SAGAWA RYUSUKE, NAKAZAWA ATSUSHI, KURAZUME RYO, IKEUCHI KATSUSHI

    IPSJ journal   46 ( 9 )   2369 - 2378   2005.9

     More details

    Language:Japanese   Publisher:Information Processing Society of Japan (IPSJ)  

    This paper describes a method for the parallel alignment of multiple range images. Since it is difficult to align a large number of range images simultaneously, we developed a parallel method to accelerate the process and reduce its memory requirement. Although a general simultaneous alignment algorithm searches for correspondences over all pairs of range images, our method rejects redundant dependencies, which accelerates computation and reduces the amount of memory used on each node. The correspondence search is performed independently for each pair of range images; accordingly, the computations for the pairs are performed in parallel on multiple processors. All relations between range images are described as a pair-node hypergraph, and an optimal pair assignment is computed by partitioning the graph properly. The method was tested on a 16-processor PC cluster, where it demonstrated high extensibility and improvements in computation time and memory usage.

    CiNii Article

    CiNii Books

    researchmap

    Other Link: http://id.nii.ac.jp/1001/00010541/

  • The Structure Analysis of Dance Motions Using Motion Capture and Musical Information

    SHIRATORI Takaaki, NAKAZAWA Atsushi, IKEUCHI Katsushi

    The transactions of the Institute of Electronics, Information and Communication Engineers. D-II   88 ( 8 )   1662 - 1671   2005.8

     More details

    Language:Japanese   Publisher:The Institute of Electronics, Information and Communication Engineers  

    CiNii Article

    CiNii Books

    researchmap

  • Extraction of Natural Quadric Surfaces from Range Image Using Hough Transform and EM Algorithm

    YAMAMOTO Kokushi, NAKAZAWA Atsushi, KIYOKAWA Kiyoshi, TAKEMURA Haruo

    IEICE technical report. Natural language understanding and models of communication   104 ( 667 )   55 - 60   2005.2

     More details

    Language:Japanese   Publisher:The Institute of Electronics, Information and Communication Engineers  

    In this paper, we present a novel method to estimate the parameters of natural quadric (sphere, cylinder, and cone) and planar surfaces from a range image. First, model surface parameters are estimated using a Hough transform technique. Based on these estimates, more reliable parameters are then calculated using the EM algorithm. We implemented this method and applied it to actual range data.

    CiNii Article

    CiNii Books

    researchmap
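
    For a flavor of estimating sphere parameters from range points: the abstract's first step uses a Hough transform, but the sketch below substitutes a simpler algebraic least-squares fit on hypothetical noise-free points, just to show how center and radius fall out of the quadric form x^2+y^2+z^2 + Dx + Ey + Fz + G = 0.

```python
# Algebraic least-squares sphere fit via 4x4 normal equations,
# solved with plain Gaussian elimination (partial pivoting).

def fit_sphere(points):
    """Fit x^2+y^2+z^2 + D x + E y + F z + G = 0; return (center, radius)."""
    # Accumulate A^T A u = A^T b with rows [x, y, z, 1], b = -(x^2+y^2+z^2).
    A = [[0.0] * 4 for _ in range(4)]
    b = [0.0] * 4
    for x, y, z in points:
        row = [x, y, z, 1.0]
        rhs = -(x * x + y * y + z * z)
        for i in range(4):
            for j in range(4):
                A[i][j] += row[i] * row[j]
            b[i] += row[i] * rhs
    # Forward elimination.
    for col in range(4):
        piv = max(range(col, 4), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, 4):
            f = A[r][col] / A[col][col]
            for j in range(col, 4):
                A[r][j] -= f * A[col][j]
            b[r] -= f * b[col]
    # Back substitution.
    u = [0.0] * 4
    for r in range(3, -1, -1):
        u[r] = (b[r] - sum(A[r][j] * u[j] for j in range(r + 1, 4))) / A[r][r]
    D, E, F, G = u
    cx, cy, cz = -D / 2, -E / 2, -F / 2
    radius = (cx * cx + cy * cy + cz * cz - G) ** 0.5
    return (cx, cy, cz), radius

if __name__ == "__main__":
    # Points on a sphere of radius 2 centered at (1, 2, 3).
    pts = [(3, 2, 3), (-1, 2, 3), (1, 4, 3), (1, 0, 3), (1, 2, 5), (1, 2, 1)]
    center, r = fit_sphere(pts)
    print(center, r)
```

A Hough-based estimator as in the paper instead votes in parameter space, which tolerates outliers and mixed surface types; the EM step then refines whichever initial estimate is used.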

  • An Observation-Based Method for Generating Dance Motion from Music and Motion Capture Data

    中澤篤志

    22nd Meeting on Image Recognition and Understanding (MIRU2005), August   2005

  • Fast simultaneous alignment of multiple range images using index images Reviewed

    T Oishi, A Nakazawa, R Kurazume, K Ikeuchi

    FIFTH INTERNATIONAL CONFERENCE ON 3-D DIGITAL IMAGING AND MODELING, PROCEEDINGS   476 - 483   2005

     More details

    Language:English   Publisher:IEEE COMPUTER SOC  

    This paper describes a fast and easy-to-use simultaneous alignment method for multiple range images. The most time-consuming part of the alignment process is searching for corresponding points. Although the "inverse calibration" method quickly searches for corresponding points in complexity O(n), where n is the number of vertices, it requires look-up tables or precise sensor parameters. We therefore propose an easy-to-use method based on an "index image", which can be created rapidly using graphics hardware without precise sensor parameters. For fast computation of the rigid transformation matrices of a large number of range images, we used a linearized error function and applied the incomplete Cholesky conjugate gradient (ICCG) method to solve the linear equations. Experimental results aligning a large number of range images measured with laser range sensors show the effectiveness of our method.

    DOI: 10.1109/3DIM.2005.41

    Web of Science

    researchmap
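To illustrate the "index image" idea described above, the toy sketch below renders vertex indices into a small image once; afterwards, any correspondence query is a single pixel read rather than a nearest-neighbor search. It assumes a trivial orthographic camera over the unit square and is not the paper's graphics-hardware implementation.

```python
import numpy as np

def build_index_image(vertices, res=64):
    # Each pixel stores the index of the closest (smallest-z) vertex
    # projecting there under an orthographic camera; -1 means empty.
    img = np.full((res, res), -1, dtype=int)
    depth = np.full((res, res), np.inf)
    px = np.clip((vertices[:, 0] * res).astype(int), 0, res - 1)
    py = np.clip((vertices[:, 1] * res).astype(int), 0, res - 1)
    for i in range(len(vertices)):
        u, v, z = px[i], py[i], vertices[i, 2]
        if z < depth[v, u]:
            depth[v, u] = z
            img[v, u] = i
    return img

def lookup_correspondence(img, point, res=64):
    # O(1) correspondence: project the query point and read the index.
    u = min(int(point[0] * res), res - 1)
    v = min(int(point[1] * res), res - 1)
    return img[v, u]
```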

  • Task model of lower body motion for a biped humanoid robot to imitate human dances Reviewed

    S Nakaoka, A Nakazawa, F Kanehiro, K Kaneko, M Morisawa, K Ikeuchi

    2005 IEEE/RSJ International Conference on Intelligent Robots and Systems, Vols 1-4   2769 - 2774   2005

     More details

    Language:English   Publisher:IEEE  

    The goal of this study is to develop a biped humanoid robot that can observe a human dance performance and imitate it. To achieve this goal, we propose a task model of lower body motion, which consists of task primitives (what to do) and skill parameters (how to do it). Based on this model, a sequence of task primitives and their skill parameters is detected from human motion, and robot motion is regenerated from the detected result under the constraints of a robot. This model can generate human-like lower body motion including various waist motions as well as various stepping motions of the legs. Generated motions can be performed stably on an actual robot supported by its own legs. We used the improved robot hardware HRP-2, which has superior features in body weight, actuators, and DOF of the waist. Using the proposed method and HRP-2, we realized a robot performance of a Japanese folk dance, synchronized with a performance by a human grand master on the same stage.

    DOI: 10.1109/IROS.2005.1545395

    Web of Science

    researchmap

  • 語学教育教材を利用した大学合同による実証実験 Reviewed

    間瀬健二, 梶田将司, Seiie Jang, 上田真由美, 杉浦達樹, 佐々木順子, 美濃導彦, 壇辻正剛, 中村 裕一, 角所考, 元木環, 正司哲朗, 竹村治雄, 中澤篤志, 浦真吾, 鐘ヶ江力, 岩澤亮祐

    文部科学省研究委託事業『知的資産の電子的な保存・活用を支援するソフトウェア技術基盤の構築』平成17年度研究概要, pp. 27–34, 2005-12.   2005

     More details

    Language:Japanese  

    researchmap

  • 大阪大学サイバーメディアセンターにおける新情報教育システム

    桝田秀夫, 小川剛史, 中澤篤志, 町田貴史, 清川清, 竹村治雄

    PC Conference論文集   2005 (Web)   2005

  • ユビキタス環境下での次世代コース管理システム

    梶田 将司, 中澤 篤志, 角所 考

    名古屋大学情報連携基盤センターニュース   3 ( 4 )   271 - 276   2004.11

     More details

    Language:Japanese   Publisher:名古屋大学  

    CiNii Article

    CiNii Books

    researchmap

  • Structure Analysis of Human Motion and Motion Imitation of Humanoids -Dancing Humanoid Project-

    NAKAZAWA Atsushi, NAKAOKA Shinichiro, SHIRATORI Takaaki, KUDOH Shunsuke, IKEUCHI Katsushi

    IPSJ SIG Notes. CVIM   2004 ( 113 )   31 - 39   2004.11

     More details

    Language:Japanese   Publisher:Information Processing Society of Japan (IPSJ)  

    Robot interaction is the reaction of a robot to its outer world according to external signals made by humans or the environment. In particular, humanoids that perform cooperative tasks with humans must communicate with them smoothly and naturally. This is achieved through technologies for natural motion synthesis and for understanding external signals. In this paper, we introduce our project, which aims to develop methods to import human motion, in particular human dance motion, into humanoids. Through our research, we concentrated on how to express the 'naturalness' of human motion and on how to synchronize motion to the most important signal of dance: the music.

    CiNii Article

    CiNii Books

    researchmap

    Other Link: http://id.nii.ac.jp/1001/00052434/

  • 民俗芸能のデジタルアーカイブとロボットによる動作提示

    池内 克史, 中澤 篤志, 小川原 光一, 高松 淳, 工藤 俊亮, 中岡 慎一郎, 白鳥 貴亮

    日本バーチャルリアリティ学会誌 = Journal of the Virtual Reality Society of Japan   9 ( 2 )   78 - 84   2004.6

     More details

  • Modeling Indoor Scene by Determining its Reflection Parameters

    HARADA TAKAAKI, HARA KENJI, NAKAZAWA ATSUSHI, SAITO HIROAKI, IKEUCHI KATSUSHI

    IPSJ SIG Notes. CVIM   2004 ( 6 )   45 - 52   2004.1

     More details

    Language:Japanese   Publisher:Information Processing Society of Japan (IPSJ)  

    Realistically modeling an indoor scene is a crucial and challenging problem. An indoor scene involves a large amount of three-dimensional range data and spatial restrictions on two-dimensional photographing, as well as inevitable specularities in certain parts. In this paper we deal with these problems. Our method principally uses three-dimensional (3D) geometric data and ordinary two-dimensional (2D) color images. The 3D geometric data are created by parallel alignment and merging of a large number of range images. We texture the 3D data with 2D color images using a simultaneous registration technique. Then, we estimate the reflection parameters of the diffuse and specular reflection components from a single image in which specularities occur. Finally, using all acquired and estimated data, we can generate realistic synthetic images of real objects.

    CiNii Article

    CiNii Books

    researchmap

    Other Link: http://id.nii.ac.jp/1001/00052531/

  • Simultaneous Alignment of a Large Number of Range Images

    大石岳史, 佐川立昌, 中沢篤志, 倉爪亮, 池内克史

    3D映像   18 ( 4 )   2004

  • Matching and blending human motions using temporal scaleable dynamic programming Reviewed

    A. Nakazawa, S. Nakaoka, K. Ikeuchi

    2004 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)   1   287 - 294   2004

     More details

    Language:English  

    researchmap

  • Detecting dance motion structure through music analysis Reviewed

    T Shiratori, A Nakazawa, K Ikeuchi

    SIXTH IEEE INTERNATIONAL CONFERENCE ON AUTOMATIC FACE AND GESTURE RECOGNITION, PROCEEDINGS   857 - 862   2004

     More details

    Language:English   Publisher:IEEE COMPUTER SOC  

    Today, many important intangible cultural properties of the world are being lost because of the lack of successive performers. Digital archiving technology is one effective solution to this issue, and we have started a digital archiving project for cultural properties including these intangible ones. For such human motion archives, a method for automatic motion structure analysis is vital for a variety of purposes. We believe that dance motion consists of "primitive motions" and that motion analysis is necessary to detect these components. Particularly for dance motions, we think these primitives must be synchronized to the musical rhythm. In this paper, we introduce musical information into motion structure analysis. The method automatically detects the musical rhythm, segments the original motion, and classifies the segments into primitive motions. The experimental results confirm that our motion analysis yields primitive motions in accordance with the musical rhythm.

    DOI: 10.1109/AFGR.2004.1301641

    Web of Science

    researchmap
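The segmentation idea above (cut motion at the musical rhythm) can be sketched with a toy period estimator. The paper's rhythm tracking works on real music audio; this illustration replaces it with a generic autocorrelation over a 1-D signal, and all names are assumed.

```python
import numpy as np

def estimate_period(signal, min_lag=2):
    # Dominant period via autocorrelation: strongest lag past min_lag.
    s = signal - signal.mean()
    ac = np.correlate(s, s, mode='full')[len(s) - 1:]
    return int(np.argmax(ac[min_lag:]) + min_lag)

def segment_by_beats(n_frames, period):
    # Cut a motion sequence into beat-aligned (start, end) segments.
    return [(start, min(start + period, n_frames))
            for start in range(0, n_frames, period)]
```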

  • Representing cultural heritage in digital forms for VR systems through computer vision techniques Reviewed

    K Ikeuchi, A Nakazawa, K Hasegawa, T Ohishi

    PROCEEDINGS OF THE 17TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION, VOL 4   4   1 - 6   2004

     More details

    Language:English   Publisher:IEEE COMPUTER SOC  

    This paper overviews our research on digital preservation of cultural assets and digital restoration of their original appearance. Geometric models are digitally acquired through a pipeline consisting of scanning, registering, and merging multiple range images. We have developed a robust simultaneous registration method and an efficient and robust voxel-based integration method. Onto the geometric models created, we align texture images acquired from a color camera; for this we have developed a texture mapping method that utilizes laser reflectance. In an attempt to restore the original appearance of historical heritage objects, we have synthesized several buildings and statues using scanned data and literature surveys, with advice from experts.

    Web of Science

    researchmap

  • Symbolic Motion Description for a Dancing Humanoid Robot

    NAKAOKA Shinichiro, NAKAZAWA Atsushi, YOKOI Kazuhito, IKEUCHI Katsushi

    Technical report of IEICE. PRMU   103 ( 390 )   55 - 60   2003.10

     More details

    Language:Japanese   Publisher:The Institute of Electronics, Information and Communication Engineers  

    We have developed a system for humanoid robots to imitate whole body motions performed by a human. Our target motions are traditional folk dances. Human motions can be acquired by a motion capture system. However, those motion data cannot be imported directly into a robot, which has body properties different from those of humans. In order to realize motion that satisfies the essence of the original dance under the robot's constraints, we propose a motion recognition and generation method using symbolic motion descriptions. This method has realized an actual performance of "Tsugaru Jongara-Bushi", one of the Japanese traditional folk dances, on the humanoid robot "HRP-1S".

    CiNii Article

    CiNii Books

    researchmap

  • Estimating Illumination Position, Color and Surface Reflectance Properties from a Single Image

    HARA KENJI, RobbyT.Tan, NISHINO KO, NAKAZAWA ATSUSHI, IKEUCHI KATSUSHI

    44 ( 9 )   94 - 104   2003.7

     More details

    Language:Japanese   Publisher:Information Processing Society of Japan (IPSJ)  

    In this paper we propose a new method for estimating the position and color of a light source, as well as the reflectance properties of a real object's surface, from a single image. We use the intensity of the diffuse and specular components for estimating the light source position, while the color distribution of the specular region is used for estimating the light source color. The flow of this method is as follows: first, an initial position of the light source is estimated from the peak location of the specular region and a rough intensity value of the diffuse region. This diffuse-to-specular intensity value is also used to determine the initial values of the object reflectance properties. After obtaining the initial values, the light position and reflectance properties are estimated simultaneously using an iterative fitting method. Finally, the light source color is estimated from the color distribution of the specular region. Knowing the light source position, color, and the object reflectance properties, we can freely generate synthetic images under arbitrary light source conditions.

    CiNii Article

    CiNii Books

    researchmap

    Other Link: http://id.nii.ac.jp/1001/00018076/

  • Retrieving and applying human dance motion styles from several motion data

    NAKAZAWA Atsushi, NAKAOKA Shinichiro, IKEUCHI Katsushi

    IPSJ SIG Notes. CVIM   2003 ( 36 )   101 - 107   2003.3

     More details

    Language:Japanese   Publisher:Information Processing Society of Japan (IPSJ)  

    This paper presents a method to analyze human motion for the purpose of digitally archiving intangible cultural heritages such as folk dances. In recent studies, the whole motion sequence is segmented and directly used for computer animation or motion analysis. We propose the idea that human dance motion consists of a 'Basic Motion' and 'Motion Styles'. The Basic Motion is common to all dancers, and the Motion Styles represent the uniqueness of individual dancers. In the experiment, we confirmed that the proposed method works effectively on motion data of male and female dancers.

    CiNii Article

    CiNii Books

    researchmap

    Other Link: http://id.nii.ac.jp/1001/00052645/
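The Basic Motion / Motion Styles decomposition described above can be illustrated minimally: with time-aligned motion data from several dancers, take the mean trajectory as the shared "basic motion" and the per-dancer residuals as "styles". This is a simplification under assumed names, not the paper's actual procedure.

```python
import numpy as np

def decompose_motions(motions):
    # motions: array (n_dancers, n_frames, n_dofs), time-aligned.
    basic = motions.mean(axis=0)    # motion common to all dancers
    styles = motions - basic        # per-dancer deviation ("style")
    return basic, styles
```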

  • Parallel Alignment of a Large Number of Range Images on PC Cluster

    OISHI Takeshi, SAGAWA Ryusuke, NAKAZAWA Atsushi, KURAZUME Ryo, IKEUCHI Katsushi

    IPSJ SIG Notes. CVIM   137 ( 36 )   27 - 34   2003.3

     More details

    Language:Japanese   Publisher:Information Processing Society of Japan (IPSJ)  

    This paper describes a parallel alignment method for multiple range images. It is difficult to align a large number of range images simultaneously, so we developed a parallel method to improve time and memory performance. The method was tested on a 16-processor PC cluster and showed high extensibility and improvements in both time and memory performance.

    CiNii Article

    CiNii Books

    researchmap

  • Geometric and Photometric Merging for Large-Scale Objects

    SAGAWA Ryusuke, MASUDA Tomohito, OISHI Takeshi, NISHINO Ko, NAKAZAWA Atsushi, KURAZUME Ryo, IKEUCHI Katsushi

    IPSJ SIG Notes. CVIM   137 ( 36 )   9 - 18   2003.3

     More details

    Language:Japanese   Publisher:Information Processing Society of Japan (IPSJ)  

    In this paper, we consider the geometric and photometric modeling of large-scale and intricately shaped objects, such as cultural heritage objects. When modeling such objects, new issues occurred during the modeling steps which had not been considered in the previous research conducted on the modeling of small, indoor objects. First, when modeling a large-scale and intricately shaped object, a huge amount of data is required to model the object. We would like to propose two approaches to handling this amount of data. We propose a novel method for searching the nearest neighbor by extending the search algorithm using a k-d tree. We constructed a merged model in adaptive resolution according to the geometric and photometric attributes of range images for efficient use of computational resources. Second, we reconstructed a 3D model with an appearance which successfully discarded outliers due to specular reflection, by taking a consensus of the appearance changes of the target object from multiple range images. The photometric attribute of the model can be used for aligning with 2D color images. The third issue is complementing unobservable surfaces of an object. We propose a novel method to complement such holes or gaps in the surfaces, areas which are not observed by any scans. In this novel method, the surface of an object is scanned using various kinds of sensors. We efficiently complement them by taking a consensus of the signed distances of neighbor voxels.

    CiNii Article

    CiNii Books

    researchmap

  • Recognition and Generation of Leg Motion for Dance Imitation by a Humanoid Robot

    NAKAOKA Shinichiro, NAKAZAWA Atsushi, YOKOI Kazuhito, IKEUCHI Katsushi

    IPSJ SIG Notes. CVIM   2003 ( 36 )   93 - 100   2003.3

     More details

    Language:Japanese   Publisher:Information Processing Society of Japan (IPSJ)  

    This study attempts to realize a humanoid robot that can imitate human dances. Human motion can be acquired by a motion capture system. However, the captured data cannot be directly imported to a robot because of differences in body structure between a human and the robot. Since leg motion must maintain balance, its constraints are too restrictive to apply the captured data directly. This paper proposes a method to recognize human motion and generate robot motion from the recognition result. Primitive motions required to express leg motion in dances are defined, and a sequence of primitives is extracted from captured motion data. Leg motion of the robot is generated from the primitive sequence so as to satisfy the leg constraints and dynamic balance. The generated motion was tested on the OpenHRP dynamics simulator, and we verified the validity of this method.

    CiNii Article

    CiNii Books

    researchmap

    Other Link: http://id.nii.ac.jp/1001/00052644/

  • Generating whole body motions for a biped humanoid robot from captured human dances Reviewed

    S Nakaoka, A Nakazawa, K Yokoi, H Hirukawa, K Ikeuchi

    2003 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, VOLS 1-3, PROCEEDINGS   3   3905 - 3910   2003

     More details

    Language:English   Publisher:IEEE  

    The goal of this study is a system for a robot to imitate human dances. This paper describes the process of generating whole body motions that can be performed by an actual biped humanoid robot. Human dance motions are acquired through a motion capture system. We then extract a symbolic representation made up of primitive motions: essential postures in arm motions and step primitives in leg motions. A joint angle sequence for the robot is generated according to these primitive motions, and the joint angles are then modified to satisfy the mechanical constraints of the robot. For balance control, the waist trajectory is moved to acquire dynamic consistency based on the desired ZMP. The generated motion was tested on the OpenHRP dynamics simulator. In our test, the Japanese folk dance 'Jongara-bushi' was successfully performed by HRP-1S.

    Web of Science

    researchmap

  • Modeling from Reality - Creating virtual reality models through observation Reviewed

    K Ikeuchi, A Nakazawa, K Nishino, R Sagawa, T Oishi, H Unten

    VIDEOMETRICS VII   5013   117 - 128   2003

     More details

    Language:English   Publisher:SPIE-INT SOC OPTICAL ENGINEERING  

    In this paper, we present an overview of our project to construct a digital archive of cultural heritages. Among the efforts in our project, we briefly overview our research on geometric and photometric preservation of cultural assets and restoration of their original appearance. Digital geometric modeling is achieved through a pipeline consisting of scanning, registering, and merging multiple range images. For these purposes, we have developed a robust simultaneous registration method and an efficient and robust voxel-based integration method. On top of the geometric model, we align texture images acquired during scanning. Because the geometric relation between the range sensor and the image sensor is calibrated, we can automatically align texture images onto the geometric models. For photometric modeling, we have developed a surface-light-field-based method, which captures the appearance variation of real-world objects under different viewpoints and illumination conditions from a series of images. As an attempt to restore the original appearance of historical heritages, we have reconstructed several buildings and statues that have been lost in the past. In this paper, we overview these techniques and show several results of applying the proposed methods to existing ancestral assets.

    DOI: 10.1117/12.473086

    Web of Science

    researchmap

  • JST-1 Digital Archive of Cultural Heritage through Observation

    IKEUCHI Katsushi, NAKAZAWA Atsushi

    2002   1 - 2   2002.9

     More details

    Language:Japanese   Publisher:Forum on Information Technology  

    CiNii Article

    CiNii Books

    researchmap

  • Digital Archive of Cultural Heritage through Observation

    IKEUCHI Katsushi, NAKAZAWA Atsushi

    Journal of the Visualization Society of Japan   22   7 - 10   2002.7

     More details

  • 視覚による舞踊動作の保存・解析および生成

    中澤篤志

    画像の認識・理解シンポジウム2002   153   2002

  • Imitating human dance motions through motion structure analysis Reviewed

    A Nakazawa, S Nakaoka, K Ikeuchi, K Yokoi

    2002 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-3, PROCEEDINGS   3   2539 - 2544   2002

     More details

    Language:English   Publisher:IEEE  

    This paper presents a method for importing human dance motion into humanoid robots through visual observation. The human motion data are acquired from a motion capture system consisting of 8 cameras and an 8-PC cluster. The whole motion sequence is then divided into motion elements and clustered into groups according to the correlation of the end-effectors' trajectories. We call these segments 'motion primitives'. New dance motions are generated by concatenating these motion primitives. We are also working to make a humanoid perform these original or generated motions using inverse kinematics and a dynamic balancing technique.

    Web of Science

    researchmap
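The grouping of motion elements by trajectory correlation mentioned in the abstract might be sketched as a greedy clustering over equal-length segments. This is an assumption for illustration; the paper's actual clustering procedure is not reproduced here.

```python
import numpy as np

def cluster_by_correlation(segments, thresh=0.9):
    # Greedy grouping: a segment joins the first cluster whose
    # representative trajectory correlates above `thresh`,
    # otherwise it starts a new cluster.
    clusters = []
    for seg in segments:
        for cl in clusters:
            r = np.corrcoef(seg.ravel(), cl[0].ravel())[0, 1]
            if r >= thresh:
                cl.append(seg)
                break
        else:
            clusters.append([seg])
    return clusters
```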

  • Iterative Refinement of Range-Measurement Accuracy by Considering the Direction of Error of Range Images.

    大石岳史, 佐川立昌, 中沢篤志, 倉爪亮, 池内克史

    情報処理学会シンポジウム論文集   2002 ( 11,Pt.2 )   2002

  • Digital Archive of the Human Motion

    2001   25 - 32   2001.12

     More details

    Language:Japanese  

    CiNii Article

    researchmap

  • Human Positioning System Using Distributed Camera Agents

    NAKAZAWA Atsushi, KATO Hirokazu, INOKUCHI Seiji

    Transactions of Information Processing Society of Japan   41 ( 10 )   2895 - 2906   2000.10

     More details

    Language:Japanese   Publisher:Information Processing Society of Japan (IPSJ)  

    This paper proposes the "Distributed Camera System", which can detect human position in a wide area. The system is constructed of many "camera agents" and achieves its task through cooperation. A camera agent consists of a camera and an image processor, and a computer network connects the agents. They are placed in a real environment, and their viewing areas are either overlapping or separated. Each camera agent makes plans using an environmental map and its own viewing information. Using these plans, the system can continuously track a person across all the viewing areas of the camera agents. In addition, the system is robust with respect to agent failure. We tested the system in two respects: the detected trajectories and the system's robustness. In this paper, we present the results of these evaluations, confirming the efficiency of our system.

    CiNii Article

    CiNii Books

    researchmap

    Other Link: http://id.nii.ac.jp/1001/00012173/

  • 分散観測エージェントによる複数人物の追跡

    中澤篤志

    画像の認識理解シンポジウム(MIRU2000)予稿集   15 - 20   2000

  • 分散カメラシステムによる人物の追跡

    中澤篤志

    画像の認識・理解シンポジウム (MIRU'98)   1   1 - 6   1998


Presentations

  • 優しい介護インタラクションの計算的・脳科学的解明 Invited

    中澤篤志, 倉爪 亮, 本田美和子, 佐藤 弥, 石川翔吾, 吉川佐紀子, 伊藤美緒

    2019.7.31 

     More details

    Language:Japanese   Presentation type:Oral presentation (invited, special)  

    researchmap

  • 人の目の観察により導き出されるもの ~ 注視・環境情報・介護スキルの推定 ~ Invited

    中澤 篤志

    電子情報通信学会ヒューマン情報処理研究会(HIP),日本光学会視覚研究グループ  2018.10.22 

     More details

    Language:Japanese   Presentation type:Oral presentation (invited, special)  

    researchmap

  • 「優しい介護」インタラクションの計算的・脳科学的解明 ~パターン認識は介護に何ができるのか?~ Invited

    中澤 篤志

    第17回情報科学技術フォーラム  2018.9.20 

     More details

    Language:Japanese   Presentation type:Oral presentation (invited, special)  

    researchmap

  • 人の視覚情報の可視化と優しい介護技術学習への展開 Invited

    中澤 篤志

    破壊的イノベーションがもたらすデジタル社会研究会  2018.1.20 

     More details

    Language:Japanese   Presentation type:Oral presentation (invited, special)  

    researchmap

  • The environmental light and your eye - retrieving your vision using computer vision Invited

    NAKAZAWA Atsushi

    CiNII Lunch Seminar  2017.10.13 

     More details

    Language:English   Presentation type:Oral presentation (invited, special)  

    researchmap

  • Corneal Imaging Technique -- foundations and applications Invited

    NAKAZAWA Atsushi

    NAIST Seminar  2017.8.3 

     More details

    Language:English   Presentation type:Oral presentation (invited, special)  

    researchmap

  • Non-Calibrated and Real-Time Human View Estimation Using a Mobile Corneal Imaging Camera International conference

    NAKAZAWA Atsushi

    2017 International Symposium Toward the Future of Advanced Research at Shizuoka University  2017.3.8 

     More details

    Language:English   Presentation type:Oral presentation (invited, special)  

    researchmap

  • 画像認識技術による感情のセンシングと人の視覚把握手法 Invited

    中澤 篤志

    自動車技術会 第9回ヒューマンファクター部門委員会  2015.8.27 

     More details

    Language:Japanese   Presentation type:Public lecture, seminar, tutorial, course, or other speech  

    researchmap

  • Non-Calibrated and Real-Time Human View Estimation Using a Mobile Corneal Imaging Camera International conference

    Atsushi Nakazawa

    Japan-Korea Workshop on Information and Robot Technology for Daily Life Support  2015.5 

     More details

    Language:English   Presentation type:Oral presentation (general)  

    researchmap

  • 視覚操作タスクでの集中度と瞳孔径変化の定量的関係

    Yuta Tange, Asami Matsumoto, Atsushi Nakazawa, Toyoaki Nishida

    Meeting on Image Recognition and Understanding  2015 

     More details

    Language:Japanese  

    researchmap

  • A Mobile Corneal Imaging Camera for Estimation of Human's View

    Atsushi Nakazawa

    Meeting on Image Recognition and Understanding  2015 

     More details

    Language:English  

    researchmap

  • 一人称視点映像による実環境の記憶可能性推定

    Kento OIZUMI, Atsushi NAKAZAWA, Toyoaki Nishida

    Meeting on Image Recognition and Understanding  2015 

     More details

    Language:Japanese  

    researchmap

  • Detection of Gaze Target Objects Using Active Markers

    Hiroaki Kato, Atsushi Nakazawa, Toyoaki Nishida

    IEICE Technical Report, MVE  2014 

     More details

    Language:Japanese  

    researchmap

  • Exact Motif Discovery of Length-Range Motifs

    Yasser Mohammad, Toyoaki Nishida, Atsushi Nakazawa

    The 6th Asian Conference on Intelligent Information and Database Systems (ACIIDS)  2014 

     More details

    Language:English  

    researchmap

  • The Corneal Imaging Technique -- its Foundations and Applications

    Atsushi Nakazawa

    JaGFoS Symposium  2014 

     More details

    Language:English  

    researchmap

  • 角膜イメージング法によるリモート注視点推定システム

    Yusuke OKINO, Kento OIZUMI, Hiroaki KATO, Atsushi NAKAZAWA, Toyoaki Nishida

    IEICE Technical Report, MVE  2014 

     More details

    Language:Japanese  

    researchmap

  • 角膜イメージング法の基礎理論と応用 注視点・周辺視検出からシーンの高解像度復元まで

    中澤 篤志

    Japan Society for Precision Engineering, Technical Committee on Industrial Application of Image Processing  2014 

     More details

    Language:Japanese  

    researchmap

  • Robust registration of eye reflection and scene images using random resample consensus

    Atsushi Nakazawa, Christian Nitschke, Toyoaki Nishida

    Meeting on Image Recognition and Understanding  2014 

     More details

    Language:English  

    researchmap

  • Human Body-parts Tracking for Fine-grained Behavior Classification

    Norimichi Ukita, Atsushi Nakazawa

    IEEE Workshop on Decoding Subtle Cues from Social Interaction in conjunction with ICCV 2013  2013 

     More details

    Language:English  

    researchmap

  • Corneal Reflection Analysis for Point of Gaze Estimation and Other Applications

    Atsushi Nakazawa, Christian Nitschke

    The 7th International Workshop on Robust Computer Vision  2013 

     More details

    Language:English  

    researchmap

  • Super-Resolution Scene Reconstruction from Corneal Reflections

    Christian Nitschke, Atsushi Nakazawa

    Meeting on Image Recognition and Understanding  2013 

     More details

    Language:English  

    researchmap

  • Arm Pose Copying for Humanoid Robots

    Yasser Mohammad, Toyoaki Nishida, Atsushi Nakazawa

    IEEE International Conference on Robotics and Biomimetics  2013 

     More details

    Language:English  

    researchmap

  • Point of Gaze Estimation through Corneal Surface Reflection in an Active Illumination Environment

    Atsushi Nakazawa, Christian Nitschke

    Image sensing symposium  2013 

     More details

    Language:Japanese  

    researchmap

  • I see what you see: Point of Gaze Estimation from Corneal Images

    Christian Nitschke, Atsushi Nakazawa, Toyoaki Nishida

    Asian Conference on Computer Vision (ACPR2013)  2013 

     More details

    Language:English  

    researchmap


Industrial property rights

  • Image registration device, image registration method, and image registration program

    Atsushi Nakazawa, Christian Nitschke

     More details

    Application no:特願15413629  Date applied:2017.6.15

    An image registration device includes a mapping section for deciding a first mapping for transforming the first image to an environmental map and a second mapping for transforming the second image to an environmental map; a corresponding point pair extractor for extracting a pair of corresponding points by detecting one point in the first image and the corresponding point in the second image; a rotational mapping deriver for deriving a rotational mapping for registering the image of the first image in the environmental map and the image of the second image in the environmental map with each other, based on the positions and local feature amounts of the points in the first and second images; and a registration section for registering the data of the first image with the data of the second image based on the first mapping, the rotational mapping, and the second mapping.

    researchmap
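The "rotational mapping" that registers the two environment-map images can be illustrated with the standard Kabsch/SVD solution for the rotation best aligning paired direction vectors. This is a generic sketch of that textbook step, not the patented method itself.

```python
import numpy as np

def best_rotation(P, Q):
    # Kabsch/SVD: rotation R minimizing sum ||R p_i - q_i||^2
    # for paired direction vectors given as rows of P and Q.
    H = P.T @ Q
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
```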

  • 注視点検出装置、注視点検出方法、個人パラメータ算出装置、個人パラメータ算出方法、プログラム、及びコンピュータ読み取り可能な記録媒体

    Atsushi Nakazawa, Christian Nitschke

     More details

    Applicant:科学技術振興機構

    Application no:特願2013-548525 

    Patent/Registration no:特許第5467303号 

    researchmap

  • 画像位置合わせ装置、画像位置合わせ方法、および、画像位置合わせプログラム

    Atsushi Nakazawa, Christian Nitschke

     More details

    Applicant:科学技術振興機構

    Application no:特願2014-150848 

    researchmap

Awards

  • 最優秀論文賞 (International Conference on Virtual Systems and Multimedia 2004)

    2004  

     More details

  • Best Paper Award (International Conference on Virtual Systems and Multimedia 2004)

    2004  

     More details

Research Projects

  • Evaluation of care skills using wearable sensors

    Grant number:17H01779  2017.04 - 2021.03

    Japan Society for the Promotion of Science  Grants-in-Aid for Scientific Research  Grant-in-Aid for Scientific Research (B)

    Nakazawa Atsushi

      More details

    Grant amount:\13390000 ( Direct expense: \10300000 、 Indirect expense:\3090000 )

    In this project, we first studied what constitutes a better caregiving method for people with dementia, and then, building on those findings, developed a method to evaluate its effectiveness by quantifying (1) caregiving skills and (2) caregiving effectiveness in dementia care, using wearable devices worn by caregivers and statistical data analysis technology. Through this work, we showed the effect of good caregiving skills in reducing the physical and psychological burden of caregiving, established an index of good caregiving skills and its effect on caregivers, and developed a caregiving skill self-training system using wearable devices.

    researchmap

  • Detection of human internal state using pupillary response

    Grant number:16K12462  2016.04 - 2019.03

    Japan Society for the Promotion of Science  Grants-in-Aid for Scientific Research  Grant-in-Aid for Challenging Exploratory Research

    Nakazawa Atsushi

      More details

    Grant amount:\2860000 ( Direct expense: \2200000 、 Indirect expense:\660000 )

    In this study, we developed a method to estimate human task concentration using pupil size (diameter). Although it is well known that pupil diameter changes depending on the intensity of incident light, it is also affected by changes in a person's internal state. In other words, although it is theoretically possible to obtain a person's internal state from pupil diameter change, pupil diameter is not yet used as a sensing channel in real environments. In this research, we designed a Target Pointing task (hereinafter, "TP task") with variable path-width / path-length constraints, which makes it possible to vary the difficulty of the task and to study how a person's pupil diameter changes as the task difficulty is varied. From this, we obtained a model relating changes in TP task difficulty to changes in pupil diameter.

    researchmap
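A minimal sketch of how task difficulty and pupil response might be related: the TP task's path-width / path-length constraints suggest a steering-law style index of difficulty ID = A/W (an assumption for illustration; the project's actual model is not given here), with a linear pupil model fitted on top.

```python
import numpy as np

def steering_difficulty(path_length, path_width):
    # Steering-law style index of difficulty: ID = A / W
    # (assumed form; narrower/longer paths are harder).
    return path_length / path_width

def fit_pupil_model(ids, pupil_deltas):
    # Assumed linear model: pupil change ~ slope * ID + intercept.
    slope, intercept = np.polyfit(ids, pupil_deltas, 1)
    return slope, intercept
```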

  • Realization of Corneal Feedback AR

    Grant number:15H02738  2015.04 - 2018.03

    Japan Society for the Promotion of Science  Grants-in-Aid for Scientific Research  Grant-in-Aid for Scientific Research (B)

    Kiyokawa Kiyoshi

      More details

    Grant amount:\16250000 ( Direct expense: \12500000 、 Indirect expense:\3750000 )

    Targeting augmented reality (AR) using optical see-through head mounted displays (HMDs), we worked on the research and development of a new AR concept, "corneal feedback AR", which optimizes the user experience through corneal imaging. Specifically, we developed a method for automatically calibrating the eyeball center in the HMD coordinate system from corneal reflection images, and a method for estimating the gaze direction with an error of about 1.7 degrees using corneal reflection images. We also developed a method to robustly and accurately match corneal images with images from a scene camera, and based on this, built a system that automatically finds eye contact. In addition, we developed a system that automatically diagnoses anomalous eye movements from gaze behavior, and a system that lets a user replay a fast 3D motion at a slower speed around the point of gaze.

    researchmap

  • Foundation of Synthetic Evidential Study

    Grant number:15K12098  2015.04 - 2017.03

    Japan Society for the Promotion of Science  Grants-in-Aid for Scientific Research  Grant-in-Aid for Challenging Exploratory Research

    Nishida Toyoaki, NAKAZAWA Atsushi, OHMOTO Yoshimasa, MOHAMMAD Yasser, THOVUTTIKUL Sutasinee, ABE Masakazu, OOKAKI Takashi, KADA Junpei, HIGUCHI Osamu, NOGUCHI Takuma

      More details

    Grant amount: ¥3,640,000 (Direct expense: ¥2,800,000, Indirect expense: ¥840,000)

    We worked to establish the framework of synthetic evidential study, a novel method of collaborative study of social processes, in particular mysteries ranging from fiction to science and history, aiming at a powerful computational method that helps people build, share, and evolve the common ground for communication by combining role-play games, agent play, and in-situ discussions. We obtained initial results including an analysis of legacy storytelling, modeling of the mental states of werewolf-game participants, an analysis of deepened understanding through multi-layered interpretation, an analysis of the cultural dependence of service-waiting behavior, and a prototype of a synthetic evidential study support system. We gained useful insights into synthetic evidential study that we believe lay the foundation for future research development.

    researchmap

  • Non-wearable motion capture robust to occlusion and human body-shape changes

    Grant number:19700168  2007

    Japan Society for the Promotion of Science  Grants-in-Aid for Scientific Research  Grant-in-Aid for Young Scientists (B)

    NAKAZAWA Atsushi

      More details

    Grant amount: ¥2,400,000 (Direct expense: ¥2,400,000)

    researchmap

  • Course Management System for Ubiquitous Environments

    2004 - 2008

      More details

    Grant type:Competitive

    researchmap

  • Dancing Humanoid: A New Presentation of Archived Human Motion

    2000 - 2004

    JST Basic Research Programs (Core Research for Evolutional Science and Technology :CREST) 

      More details

    Grant type:Competitive

    researchmap

  • Digital Archive of Cultural Heritage

    2000 - 2004

    JST Basic Research Programs (Core Research for Evolutional Science and Technology :CREST) 

      More details

    Grant type:Competitive

    researchmap

  • Computer Animation, 3D Digital Archive, Humanoid Robotics

      More details

    Grant type:Competitive

    researchmap

▼display all

 

Class subject in charge

  • Interface Design (2023 academic year) Third semester  - Tue. 1-2

  • Interface Design (2023 academic year) Third semester  - Tue. 1-2

  • Advanced Internship for Interdisciplinary Medical Sciences and Engineering (2023 academic year) Year-round  - Other

  • Technical English for Interdisciplinary Medical Sciences and Engineering (2023 academic year) Late  - Other

  • Research Works for Interdisciplinary Medical Sciences and Engineering (2023 academic year) Year-round  - Other

  • Research Works for Interdisciplinary Medical Sciences and Engineering (2023 academic year) Year-round  - Other

  • Advanced Interdisciplinary Medical Sciences and Engineering (2023 academic year) Prophase  - Other

  • Introduction to Medical Devices and Materials (2023 academic year) Prophase  - Wed. 1-2

  • Basic Statistics for Experimental Data Processing (2023 academic year) Prophase  - Tue. 1-2

  • Basic Physics (Electromagnetics) 1 (2023 academic year) Third semester  - Fri. 3-4

  • Basic Physics (Electromagnetics) 2 (2023 academic year) Fourth semester  - Fri. 3-4

  • Basic Physics (Electromagnetics) (2023 academic year) 3rd and 4th semester  - Fri. 3-4

  • Basic Physics 2 (Electromagnetics and DC Electric Circuits) (2023 academic year) 3rd and 4th semester  - Fri. 3-4

  • Image Processing Computer Vision (2023 academic year) Prophase  - Other

▼display all

 

Media Coverage

  • Demonstrating the effectiveness of dementia-care communication training using augmented reality (AR) technology

    世界文化社  レクリエ  2023.11

     More details

Academic Activities

  • IEICE MVE Technical Committee (電子情報通信学会MVE研究会)

    Role(s):Planning, management, etc.

    Assistant Secretary  2022.4.1

     More details

    Type:Academic society, research group, etc. 

    researchmap