2016 International Conference on CYBERWORLDS
Keynote and invited talks
A modelling paradigm for artificial and virtual reality environments
by Professor Brian Wyvill (University of Victoria, Canada, Vice-President ACM SIGGRAPH)
After the advent of 3D scanners and GPUs, the triangle mesh became the dominant paradigm used by many researchers for representing models in virtual environments. This paradigm is very efficient for rendering, but it has a number of shortcomings. Mesh models carry no inherent knowledge of properties such as volume or topology, they have no built-in way of knowing about other objects in the environment, and finding contacts between models is inefficient. In this talk I discuss the problems with current technology and propose a paradigm that could solve some of them in the future.
The newly proposed paradigm is based on implicit modelling plus an embedded knowledge of physics. In the talk I will discuss some of the research that has led up to the new paradigm, including a method for speeding up traversal of a solid modelling tree by exploiting the SPMD programming model, reducing the number of memory reads, and ensuring that memory is retrieved in a predictable fashion. Other relevant earlier work includes the design of functions that define generalized blends using both gradient and distance, and applications such as implicit skinning, which are precursors to the current research.
After obtaining a doctorate in Computer Graphics from Bradford University in 1975, Brian Wyvill worked as a research fellow with Colin Emmett, helping to build a computer animation system at the Royal College of Art in London. Colin and Brian used the system for a BBC project and for the Twentieth Century Fox film "Alien" (a very early example of computer animation in a Hollywood movie). Since the early 1980s, together with his students and fellow researchers, including his brother, Geoff Wyvill, he built the GraphicsJungle research group in the computer science department at the University of Calgary, where he was a full professor. In the last ten years his work has centred around the BlobTree implicit animation system. In 2007 Brian moved to the University of Victoria, where he and his students continued their research in implicit modelling. Brian has many recent collaborators, including researchers at the Paul Sabatier, INPG, and Lyon I Universities in France. Recent collaborations have led to a number of breakthroughs in implicit modelling, including the "gradient blend" and "implicit skinning". Brian is on the Eurographics executive committee and is vice-president of ACM SIGGRAPH.
More info can be obtained at: http://webhome.cs.uvic.ca/~blob/
Semantic mobile social networks
by Professor Chin-Wan Chung (KAIST, Korea)
The Web has become part of our lives as a source of information for both professionals and ordinary people. Social networks appeared in the course of the Web's evolution. With the widespread adoption of mobile devices and the emergence of Web 3.0, known as the semantic Web, the fusion of new technologies became necessary to enable better acquisition and utilization of information. This presentation discusses research on the technologies for mobile social network based services in the Web 3.0 environment. We focus on technologies for data management, search, and analysis that make services intelligent, automated, and personalized. For data management, the data is modeled and managed as an ontology to provide inference capability and to support Web 3.0. The search work deals with semantic search, social search, and location-based search. The analysis includes network structure analysis, user characteristics analysis, and social influence analysis. The research resulted in a number of new concepts and efficient solutions. A system developed in the course of the research is also introduced: we developed a system named SIMSON (SemantIc Mobile SOcial Network), which consists of a platform and several key applications. SIMSON has been released as mobile apps and downloaded over 3,000 times.
Chin-Wan Chung is Professor in the Chongqing Liangjiang KAIST International Program at CQUT, China, and Professor Emeritus in the School of Computing at KAIST, Korea. He received his Ph.D. degree in computer engineering from the University of Michigan, Ann Arbor, USA. He was Senior Research Scientist and Staff Research Scientist in the Computer Science Department at the General Motors Research Laboratories, Warren, USA. He has published over 130 papers in international journals and conferences, and registered 25 international and domestic patents. He received the best paper award at ACM SIGMOD in 2013. He has served on the program committees of major international conferences such as ACM SIGMOD, VLDB, IEEE ICDE, WWW, and ICWS. He was Associate Editor of ACM TOIT, and is currently Associate Editor of the WWW Journal. In 2014, he was General Chair of the International WWW Conference. His current research interests include the Web, social networks, graph databases, and multimedia databases.
More info can be obtained at: http://islab.kaist.ac.kr/chungcw/
Human-Computer Interaction in Baidu Research
by Dr. Jiawei Gu (Institute of Deep Learning (IDL), Baidu Research)
In this talk he will introduce the HCI activities at Baidu and his experience designing, prototyping, and creating novel digital experiences, industrial design, and natural user interfaces (NUI) for intelligent paradigms (including wearable devices, intelligent transportation, home automation, robotics, and IoT), utilizing hardware and software technologies from artificial intelligence, deep learning, and big data. Projects include BaiduEye, DuBike, DuLight, and FaceYou. He envisions that within ten years many consumer electronic products will become some form of robot, able to sense their environment, interact with people, and make control decisions, in order to improve the quality of human life and to help us see what a future "AI-enabled" society will be like. Most of these projects have proved impactful in China.
Jiawei Gu is the Principal Architect leading the Human-Computer Interaction (HCI) team at the Institute of Deep Learning (IDL), Baidu Research. He is currently responsible for directing a team that designs, prototypes, and creates novel digital experiences, industrial design, and natural user interfaces (NUI) for intelligent paradigms. Previously he worked at Microsoft Research, focusing mainly on tangible and embodied I/O interfaces and solutions for next-generation Microsoft products, including Windows Surface, Xbox Kinect, and Windows 10. He has been awarded 22 US invention patents and more than 120 domestic patents, and has acted as a corporate instructor for the Stanford ME310 Global New Product Design Innovation Program. His experience and interests lie at the intersection of user-centered design and applied research.
Neuroscience based design: fundamentals and applications
by Dr. Olga Sourina (Fraunhofer IDM@NTU, Singapore)
Neuroscience-based or neuroscience-informed design is a new area of Brain-Computer Interaction (BCI) application. It takes its roots in the study of human well-being in architecture and in human factors studies in engineering and manufacturing. We propose and develop an Electroencephalogram (EEG)-based system to monitor and analyze human factors measurements of newly designed systems, hardware, and/or workplaces. The EEG is used as a tool to monitor and record the brain states of subjects during human factors experiments. In traditional human factors studies, data on mental workload, stress, and emotion are obtained through questionnaires administered upon completion of a task or of the whole experiment. However, this method only offers an evaluation of subjects' overall feelings during task performance and/or after the experiment. Real-time EEG-based human factors evaluation of designed systems allows researchers to analyze the changes in subjects' brain states during the performance of various tasks. Machine learning techniques are applied to the EEG data to recognize levels of mental workload, stress, and emotion during each task. By utilizing the proposed EEG-based system, a true understanding of subjects' working patterns can be obtained. Based on analysis of the objective real-time data together with the subjective feedback from the subjects, we are able to reliably evaluate current systems, hardware, and/or workplace designs and refine new concepts for future systems. We describe real-time algorithms for emotion, mental workload, and stress recognition from EEG, and their integration into human-machine interfaces, including cadet/captain stress assessment systems.
Olga Sourina is Principal Research Scientist at the Fraunhofer IDM@NTU Research Center and head of the Cognitive Human Computer Interaction research lab. She received her Master of Science degree from the Moscow Engineering Physics Institute (MEPhI), Russia, in 1983, and her PhD in Computer Science from Nanyang Technological University, Singapore, in 1998. Her research interests are in brain-computer interfaces, including real-time emotion, stress, vigilance, and mental workload recognition, neuroscience-based design, visual and haptic interfaces, serious games, visual data mining, and virtual reality. Dr Sourina has more than 130 publications, including more than 40 research papers in international refereed journals and 3 books. She has presented more than 70 papers and given 15 invited and keynote talks at international conferences. She serves on the program committees of many international conferences, including the Cyberworlds conferences. She is a senior member of IEEE, a member of the Biomedical Engineering Society, and a member of the International Organization of Psychology.
More info can be obtained at: http://www.ntu.edu.sg/home/eosourina/