MULTI-PERSPECTIVE UNDERSTANDING COGNITIVE BEHAVIOUR GROUP
PREDICTING HUMAN BEHAVIOUR
Understanding the mind’s influence on what we perceive may help researchers model human behaviour to create better smart devices.
Research Keywords: Experimental Psychology, Cognitive Science, Data Mining
As computers become more powerful, nuanced modelling of human decisions is becoming possible. At the same time, new technologies are bringing fresh tools and disciplines to researchers at Chiba University studying the hidden relationships between behaviour and phenomena ranging from the physiological to the physical. “With the help of environmental engineers and using deep learning and virtual reality techniques we can expect big developments in the cognitive sciences,” says Professor Makoto Ichikawa, part of a diverse group known as the Multiperspective Understanding of Cognitive Behaviour (MUCB) Group.
Ichikawa and his colleagues have been examining the reactions of individuals during a digitally created fire event in a virtual reality environment. While psychologists often use methods such as questionnaires to understand reactions and decisions, Ichikawa’s colleagues can also use virtual reality scenarios to study biological parameters – such as blood pressure, heart rate and pupil dilation – in real time. At the same time, they can record reactions related to changes in room temperature and humidity. Virtual reality, says Ichikawa, can reveal the effects of situational elements such as illumination, luminance, noise and space on physiological reactions, ranging from metabolism to circadian rhythm. These developments, he says, are not only exciting to psychologists, but will also provide the basis for creating more believable digital worlds.
Ichikawa’s work is complemented by that of mathematicians, such as findings published in 2016 by Chiba University computer scientists. In their paper, they proposed a new method for predicting pedestrian dynamics. Using a technique from geostatistics called ‘kriging’, the researchers were able to predict trajectories up to 10 steps ahead more than 80 per cent of the time. This, they say, could lead to exciting developments in navigation, group-behaviour analysis and abnormal-behaviour detection.
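The paper’s own predictor is not reproduced here, but the core idea of kriging – weighting known observations by a covariance model to estimate an unknown value – can be sketched. The following is a minimal, self-contained illustration of ordinary kriging applied to one coordinate of a pedestrian track over time; the Gaussian covariance function and length-scale value are illustrative assumptions, not the authors’ choices:

```python
import math

def _solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def kriging_predict(times, values, t_new, length_scale=2.0):
    """Ordinary kriging: estimate the value at t_new from (times, values).

    Solves the bordered system [C 1; 1^T 0][w; mu] = [c*; 1], where C is the
    covariance matrix of the observations and c* the covariances to t_new;
    the extra row enforces that the weights sum to one (unbiasedness).
    """
    cov = lambda h: math.exp(-(h / length_scale) ** 2)  # assumed Gaussian covariance
    n = len(times)
    A = [[cov(times[i] - times[j]) for j in range(n)] + [1.0] for i in range(n)]
    A.append([1.0] * n + [0.0])
    b = [cov(t_new - t) for t in times] + [1.0]
    w = _solve(A, b)[:n]  # kriging weights; the Lagrange multiplier is dropped
    return sum(wi * vi for wi, vi in zip(w, values))
```

With a noise-free covariance model, kriging interpolates the observations exactly; applied one coordinate at a time, the same machinery extrapolates a short distance ahead of the last observed position.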
The common fate influence
To accurately model human behaviour, it is also important to understand the mind’s effect on perception. For example, while sitting in a stationary car, some people may have felt as if they were moving backwards when a neighbouring vehicle crept forward. This sense of ‘induced motion’ is an optical illusion created in the mind.
In a recent paper published in the journal Perception, Ichikawa and his colleagues examined why one part of a visual illusion (concentric circles) appeared to be moving with or against another part of the illusion. They found that whether the parts of the image appear to move together or in opposition is linked to a person’s thoughts about the connection between the parts.
In some configurations, the two parts appeared to be moving in opposite directions. As in the car example, if one object appears to be accelerating away from another, the stationary or slower parts may seem to be moving in the opposite direction.
However, recognizing a common fate, an appreciation of the linked nature of the parts, may encourage participants to see the elements move together. The researchers suggest this stems from a principle of the German Gestalt school of psychology, which seeks to explain how our brains create order in the world. The common fate principle suggests that because logical patterns tend to take precedence over individual elements in how we see the world, a flock of birds, for example, may seem to be moving as one.
All of these findings, Ichikawa says, will have important ramifications for how we display motion, and group and individual dynamics, on digital devices, in cinematography and in virtual reality. (From CHIBA RESEARCH 2019)
| Name | Title, Affiliation | Research Themes |
| --- | --- | --- |
| ICHIKAWA Makoto | Professor, Graduate School of Humanities and Studies on Public Affairs | Cognitive Psychology |
| USHITANI Tomokazu | Associate Professor, Graduate School of Humanities and Studies on Public Affairs | Comparative Cognition |
| KIMURA Eiji | Professor, Graduate School of Humanities and Studies on Public Affairs | Perceptual Psychology |
| TOKUNAGA Rumi | Assistant Professor, College of Liberal Arts and Sciences | Visual Information Processing, Color Dynamics |
| TANAKA Midori | Assistant Professor, College of Liberal Arts and Sciences | Imaging Science |
| DEN Yasuharu | Professor, Graduate School of Humanities and Studies on Public Affairs | Corpus Linguistics, Interaction Analysis |
| MATSUKA Toshihiko | Professor, Graduate School of Humanities and Studies on Public Affairs | Cognitive Modeling |
| KAWAMOTO Kazuhiko | Professor, Graduate School of Engineering | Behavior Analysis, Machine Learning |
| MIZOKAMI Yoko | Professor, Graduate School of Engineering | Visual Information Processing |
| YATA Noriko | Assistant Professor, Graduate School of Engineering | Evolutionary Computation, Neural Network |
| ABE Akinori | Professor, Graduate School of Humanities and Studies on Public Affairs | Artificial Intelligence |
| ARAI Sachiyo | Professor, Graduate School of Engineering | Artificial Intelligence |
| SEKIYA Hiroo | Professor, Graduate School of Engineering | Wireless Communication Technology |
| MANABE Yoshitsugu | Professor, Graduate School of Engineering | Mixed Reality, Image Instrumentation |
| HORIUCHI Takahiko | Professor, Graduate School of Engineering | Color Dynamics, Pattern Recognition |
| HIRAI Keita | Associate Professor, Graduate School of Engineering | Color Information Processing |
| KOMURO Nobuyoshi | Associate Professor, IMIT | Sensor Network |
| IMAIZUMI Shoko | Associate Professor, Graduate School of Engineering | Information Security |
| SHIMIZU Satoru | Visiting Associate Professor, Graduate School of Engineering | Wireless Communication |
| KIEN NGUYEN | Assistant Professor, Graduate School of Engineering | Communication Systems |
In this project, we developed a new approach, "multiperspective understanding of cognitive behavior", to understand and predict psychological and behavioral traits efficiently and accurately, beyond the limits of conventional cognitive-science methods. We organized three groups to measure and analyze multilayered data in parallel with experiments based on cognitive-science methods: physiological data (Internal observation group), behavioral data (Behavioral observation group) and environmental data (Environmental observation group). Data-mining analysis using machine learning was also performed to investigate relationships between non-explicit factors. In the laboratory, we also developed an experimental environment using virtual reality (VR) and augmented reality (AR) technologies in order to better understand the psychological and behavioral characteristics of humans in their daily environment. The achievements of the three groups are as follows.
The Internal observation group collected data on psychological characteristics and on physiological and behavioral indicators, and examined the relationships among psychological characteristics such as perception, cognition, emotional judgment and decision-making. We found that arousing affective responses and endogenous attention improve the temporal accuracy of vision; that respiratory phase modulates performance on both exogenous and endogenous attention tasks; and that, in perceptual extraction of an overview of color distributions in the environment, the greater the color variance, the more the perceived color is displaced toward the most vivid color elements. In addition, we developed a method to investigate in detail the relationship between the feeling that lecture content is "interesting" and physiological indicators such as heart-rate waveforms, and implemented it in an application for information terminals.
The Behavioral observation group studied theories and data-collection methodologies that use computer models to elucidate the intentions and purposes of actions and the characteristics of the knowledge and thinking behind them. In deep learning, we developed and validated a data-augmentation method that automatically generates data using a convolutional neural network (CNN). We found that observational behaviors influenced by observers' personality traits form different impressions of the same face; that the visibility of pigmented spots is influenced by their location and distributional shape; that reddish faces appear brighter than yellowish faces to Japanese observers regardless of race; that texture appearance can be quantified by luminance-histogram statistics and photometric parameters; that the naturalness of image color and lightness contrast influences the vividness correction of appearance under saturation changes; that shared-belief construction via verbal behavior during collaborative work proceeds through retrospective correction, construction and reconstruction; and that non-explicit reference points influence, even unconsciously, the process of determining the framework of speech. We also developed a method for collecting human-flow data using ultra-wideband (UWB) radio and constructed a behavioral model for evacuees with different purposes.
The Environmental observation group developed technologies for collecting and analyzing environmental data in real time and for analyzing cognitive performance from video recordings. For the development of a spiking neural network (SNN), a power-saving, brain-like distributed information-processing system, we proposed a method of using infrared communication for coupling between neurons and implemented a Bluetooth Low Energy (BLE) wireless communication module. We confirmed that the number of people in a laboratory can be predicted from the responses of various sensors by training the SNN with spike-timing-dependent plasticity (STDP).
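The group's SNN implementation is not described in detail here, but the pair-based STDP rule it relies on is simple to state: a presynaptic spike shortly before a postsynaptic spike strengthens the synapse, and the reverse order weakens it, with exponential decay in the spike-time difference. The sketch below is a generic textbook form of the rule; all parameter values are illustrative assumptions, not the group's settings:

```python
import math

def stdp_update(w, t_pre, t_post, a_plus=0.05, a_minus=0.055,
                tau_plus=20.0, tau_minus=20.0, w_min=0.0, w_max=1.0):
    """One pair-based STDP weight update from a pre/post spike-time pair (ms).

    Pre-before-post (t_post > t_pre) potentiates the synapse; post-before-pre
    depresses it. Both effects decay exponentially with |t_post - t_pre|,
    and the weight is clipped to [w_min, w_max].
    """
    dt = t_post - t_pre
    if dt > 0:       # causal pair -> long-term potentiation
        w += a_plus * math.exp(-dt / tau_plus)
    elif dt < 0:     # anti-causal pair -> long-term depression
        w -= a_minus * math.exp(dt / tau_minus)
    return min(w_max, max(w_min, w))
```

Applied over many sensor-driven spike pairs, a rule of this kind strengthens synapses from inputs that reliably precede a neuron's firing, which is what lets an SNN learn associations such as sensor responses versus room occupancy without a global error signal.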
In collaboration between the Internal and Behavioral observation groups, we found a tendency for higher-order cognitive categorization to conserve cognitive resources during learning without recognizing the structure of the actual stimuli. To conduct efficient cognitive experiments in a real environment, the Internal and Environmental observation groups carried out experiments using VR and AR technologies and confirmed that search time increases linearly with the number of objects in a three-dimensional arrangement, and that auditory and visual perception interact synergistically. In collaboration between the Behavioral and Environmental observation groups, we developed an evacuation-guidance application that combines human-flow estimation with individual personality and behavioral characteristics.