Research

Last updated on August 25, 2022


Current Projects    
     
  Function-based Shape Modeling and Visualization
 
We use mathematical definitions and procedural representations to define the geometry, visual appearance and physical properties of virtual objects. We work with FReps, algebraic surfaces, implicit surfaces, CSG solids, volumetric objects, as well as parametric curves, surfaces and solids. We have developed FVRML/FX3D, a function-based extension of the Virtual Reality Modeling Language (VRML) and Extensible 3D (X3D) that allows mathematical formulas and function scripts to be used directly in VRML and X3D code to define practically any type of geometry, appearance and physical property. We are also developing ShapeExplorer, a multiplatform interactive modelling tool for teaching and research on function-based computer graphics and visualization.
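As a minimal sketch of the function-based approach (in Python for illustration, not the FVRML/FX3D syntax itself), an implicit solid can be defined by a real function that is non-negative inside the shape, and set-theoretic operations on such solids can be approximated with min/max; the grid resolution and shape sizes below are arbitrary:

```python
import numpy as np

# FRep convention: f >= 0 inside the solid, f = 0 on the boundary.
def sphere(x, y, z, r=1.0):
    return r**2 - (x**2 + y**2 + z**2)

def box(x, y, z, half=0.8):
    # An axis-aligned box as the intersection of six half-spaces via min.
    return np.minimum.reduce([half - np.abs(x), half - np.abs(y), half - np.abs(z)])

def union(f, g):         # max keeps a point if either solid contains it
    return np.maximum(f, g)

def intersection(f, g):  # min keeps a point only if both solids contain it
    return np.minimum(f, g)

# Sample the model on a grid; cells with f >= 0 belong to the solid.
x, y, z = np.meshgrid(*[np.linspace(-1.5, 1.5, 32)] * 3, indexing="ij")
model = intersection(box(x, y, z), sphere(x, y, z))
print("occupied cells:", int(np.count_nonzero(model >= 0)))
```

The sampled field can then be polygonized or ray-traced for display; min/max is the simplest choice of set operations and is typically refined with smoother R-functions in practice.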
     
  New Human-Computer Interaction
 
We add a new modality to common interactive shape modelling and virtual prototyping, combining visual, audio, haptic and free-hand interactions in virtual environments and simulations. Hand motion is captured in different ways using haptic and optical tracking devices. We design and implement sets of robust and efficient interactive hand gestures suitable for various engineering designs (virtual prototyping) and crafts (freeform shape modelling). We simulate existing hand-made assembling and modelling processes so that the required motor skills can be both trained and applied in the simulations. We also work on new ways of evaluating the quality of the interaction experience.
     
  Sound of Geometry and Geometry of Sound, Immersive Geometry
 

This project studies problems of abstract data sonification including various aspects of using AI for creating and processing sound in human-computer interaction and data analysis. We perform research in the following directions:
/1/ Sound and music generation with deep learning techniques;
/2/ Sound and music visualization with deep learning techniques;
/3/ Using sound as a new modality in human-computer interaction, interactive computer graphics, virtual reality and data visualization;
/4/ New ways of generating music with computers, e.g., by using optical tracking devices to simulate the principles of playing the theremin.
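The theremin-style interaction in direction /4/ can be sketched as follows; this is a hypothetical mapping for illustration (hand distances, frequency range and sample rate are assumptions, not the project's actual parameters):

```python
import numpy as np

def theremin_tone(pitch_dist, volume_dist, duration=0.5, sr=44100):
    """Map tracked hand distances (normalized to [0, 1]) to a tone.

    pitch_dist:  distance of one hand to the pitch antenna; closer = higher pitch.
    volume_dist: distance of the other hand to the volume loop; closer = quieter.
    """
    # Exponential pitch mapping over roughly three octaves above 220 Hz.
    freq = 220.0 * 2.0 ** (3.0 * (1.0 - np.clip(pitch_dist, 0.0, 1.0)))
    amp = np.clip(volume_dist, 0.0, 1.0)
    t = np.linspace(0.0, duration, int(sr * duration), endpoint=False)
    return amp * np.sin(2.0 * np.pi * freq * t)

samples = theremin_tone(pitch_dist=0.25, volume_dist=0.8)
print(len(samples))
```

In an actual system the two distances would be updated continuously from the optical tracker, so pitch and volume glide as the hands move, as on a physical theremin.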

     
  Cyberworlds
 
Created intentionally or spontaneously, cyberworlds are information spaces and communities that immensely augment the way we interact, participate in business and receive information throughout the world. They seriously impact our lives and the evolution of the world economy by taking such forms as social networking services, 3D shared virtual communities and massively multiplayer online role-playing games. The first Workshop on Synthetic Worlds was organized by the late Professor T.L. Kunii in 1993 (University of Aizu, Japan), with the proceedings published in the book Cyberworlds (Springer). Since 2003, I have been coordinating the annual International Conferences on Cyberworlds.
 
Past Projects
    Feasibility Study on Automatic Detection of Anomalies in the Captured Images with Limited Training Data
 
We devised an AI algorithm which, based on a limited number of images of pharma tank surfaces that are free of anomalies, is able to /1/ identify images with anomalies and /2/ locate the anomalies within the images.
     
    Automatic Optical Inspection
 
Funded by the Delta-NTU Corporate Lab, we investigated the theory and practice of building a generic framework for the Automatic Optical Inspection of electronic components based on deep learning methods that work for different types of components and defects in the production line.
     
    Tangible Images and Haptic Video Communication
 
We added a haptic interaction modality to visual rendering and common video communication across the Internet. Haptic forces are either retrieved from images and video or efficiently exchanged as asynchronous data streams across the Internet.
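One simple way to retrieve a haptic force from an image is to treat pixel intensity as a height field and push the haptic proxy down the intensity gradient; the following sketch illustrates that idea only (the `stiffness` constant and the Gaussian test image are assumptions, not the project's actual method):

```python
import numpy as np

def image_force_field(img, stiffness=1.0):
    """Treat pixel intensity as a height map and derive a 2D force field.

    The force at each pixel points down the intensity gradient, so the haptic
    proxy is pushed away from bright ridges; stiffness is a tuning constant.
    """
    img = img.astype(float)
    gy, gx = np.gradient(img)          # finite-difference gradients per axis
    return -stiffness * gx, -stiffness * gy

# Toy "image": a bright bump in the middle of a dark field.
yy, xx = np.mgrid[0:64, 0:64]
bump = np.exp(-((xx - 32) ** 2 + (yy - 32) ** 2) / 100.0)
fx, fy = image_force_field(bump)
# To the right of the bump centre, the force pushes further right.
print(fx[32, 40] > 0)
```

For video, the same field would be recomputed (or streamed) per frame, which is why asynchronous transmission of the force data matters for interactive rates.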
     
    Interactive Free-form Shape Modeling
 
Interactive modification of the function model with concurrent visualization of the respective part of it provides both interactivity and any required level of detail, leading to a photo-realistic appearance of the resulting shapes. Refer to the Function-based Shape Modeling project.
     
    Function Representation (FRep)
 
This method represents geometric shapes with the inequality f(x,y,z) ≥ 0, where the value of the real function f is positive for points inside the shape, equal to zero on its boundary and negative outside the shape. Refer to the Function-based Shape Modeling project.
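The sign convention can be checked with a short sketch (a unit sphere is used here purely as an example defining function):

```python
# FRep of a unit sphere: f(x, y, z) = 1 - x^2 - y^2 - z^2.
def f(x, y, z):
    return 1.0 - (x**2 + y**2 + z**2)

def classify(x, y, z, eps=1e-9):
    """Classify a point against the FRep sign convention."""
    v = f(x, y, z)
    if v > eps:
        return "inside"
    if v < -eps:
        return "outside"
    return "boundary"

print(classify(0.0, 0.0, 0.0))  # inside
print(classify(1.0, 0.0, 0.0))  # boundary
print(classify(2.0, 0.0, 0.0))  # outside
```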
     
    Interactive Segmentation of MRI Brain Data
 
Novel visualization algorithms developed specifically for segmentation purposes have been proposed along with a method for 3D interactive haptic correction of brain segmentation errors introduced by the fully automatic segmentation algorithms.
     
    Scientific Visualization
 
3D visualization of electroencephalograms shows how the electrical signal changes over time: the size and appearance of the visualization visually reflect the brain activity. We developed an interactive program which visualizes the signals as time-dependent 3D surfaces around a 3D human head.
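The core idea of such a time-dependent surface can be sketched as radial displacement of a sphere around the head by the signal amplitude; this is a heavily simplified illustration (one scalar amplitude per frame instead of an interpolated electrode montage, and the `gain` and resolution are arbitrary):

```python
import numpy as np

def eeg_surface(amplitudes, base_radius=1.0, gain=0.2, n=64):
    """Map a sequence of signal amplitudes to a sequence of 3D surfaces.

    Each frame displaces a sphere around the head radially in proportion
    to the amplitude, producing one deformed surface per time step.
    """
    theta = np.linspace(0.0, np.pi, n)        # polar angle
    phi = np.linspace(0.0, 2.0 * np.pi, n)    # azimuth
    theta, phi = np.meshgrid(theta, phi)
    frames = []
    for a in amplitudes:
        r = base_radius + gain * a * np.sin(theta)   # bulge near the equator
        frames.append(np.stack([r * np.sin(theta) * np.cos(phi),
                                r * np.sin(theta) * np.sin(phi),
                                r * np.cos(theta)]))
    return frames

frames = eeg_surface(np.sin(np.linspace(0, 2 * np.pi, 10)))
print(len(frames), frames[0].shape)
```

Rendering these frames in sequence animates the surface, so bursts of activity appear as visible bulges around the head model.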
     
    Virtual Orthopedic Surgery Training
 
While currently available VR surgery systems usually require expensive hardware and software, we developed a desktop VR orthopedic surgery training system that can run on commonly available personal computers. Refer to the Tangible Images and Haptic Video Communication project.
     
    Parallel Computing for Visualization and Shape Modeling
 
Parallel computing is used for computer animation rendering and interactive shape modeling based on implicit functions. We have also devised and implemented a lossless 3D compression algorithm that enables gigabytes of data to be transferred across the Internet.
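The lossless round trip that such 3D compression must guarantee can be illustrated generically with zlib on a volumetric array (this is a stand-in for demonstration only, not the project's own compression algorithm):

```python
import zlib
import numpy as np

# A toy volume: a solid block inside empty space compresses very well.
volume = np.zeros((64, 64, 64), dtype=np.uint8)
volume[16:48, 16:48, 16:48] = 255

raw = volume.tobytes()
packed = zlib.compress(raw, level=9)
restored = np.frombuffer(zlib.decompress(packed), dtype=np.uint8).reshape(volume.shape)

# Lossless means the decompressed volume is bit-identical to the original.
assert np.array_equal(volume, restored)
print(f"compressed {len(raw)} bytes down to {len(packed)} bytes")
```

A purpose-built 3D algorithm exploits spatial coherence along all three axes and thus outperforms such generic byte-stream compression on real volumetric data.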
     

    Virtual Campus
 
Virtual Campus is a shared collaborative VRML model of the campus of Nanyang Technological University in Singapore. It can be accessed from any Internet-connected personal computer running MS Windows. This project continues as the Cyberworlds project.

 

Completed Funded Projects:

  • Pharma Innovation Programme Singapore (PIPS) Research Grant A20B3a0150 Feasibility Study on Automatic Detection of Anomalies in Captured Images with Limited Training Data, S$287,151, 01.04.2021 - 31.08.2022
  • NRF-Delta-NTU Joint Corp Lab Research Grant SMA-RP4 Virtual Factory 4.0 Process Monitor Platform, S$1,061,000, 01.01.2017 - 31.12.2019
  • Ministry of Education of Singapore Research Grant MOE T1 RG17/15 Haptic Interaction with Images and Videos, S$100,000, 1 year, 2015-2016
  • Ministry of Education of Singapore Research Grant MOE T2 2011-T2-1-006 Collaborative Haptic Modeling for Orthopaedic Surgery Training in Cyberspace, S$630,874, 2011-2015
  • Singapore National Research Foundation Interactive Digital Media R&D Program Research Grant NRF2008IDM-IDM004-002 Visual and Haptic Rendering in Co-Space, S$1,248,000, 2008-2012
  • UK Engineering and Physical Sciences Research Council (EPSRC) travel funding EP/G067732 Function Based Geometry Modelling Within Visual Cyberworlds, GBP 19,710, 2 years, 2009-2011
  • SBIC Innovative Grant RP C-012/2006 Improving Measurement Accuracy of Magnetic Resonance Brain Images to Support Change Detection in Large Cohort Studies, S$605,700, 4 years, 2007-2011
  • NTU Research Grant RG27/06 Investigation, Modeling and Quantification of Brain Response to External Stimuli, S$86,692, 3 years, 2007-2010
  • Ministry of Education of Singapore Teaching Excellence Grant Cyber-learning with Cyber-instructors, S$114,100, 2 years, 2007-2009
  • Singapore AE@SG R&D Alliance Research Grant IDA/WP2 Grid-enabled Graphics and Animation Applications, S$470,000, 3.25 years, 2004-2007
  • NTU Research Grant RG01/04 Interactive Function Based Shape Modeling, S$35,521, 3 years, 2004-2006
  • NTU Research Grant RG35/96 Real Time Dynamic Simulation - Virtual Campus, S$73,700, 3 years, 1997-2000