Our summer research applications for 2019 are open! See below for more information.
One of the reasons students attend Penn is access to faculty and research opportunities, yet few undergraduates take advantage of them. While classroom experience is essential, so is the opportunity to create new knowledge by exploring the unknown. The Digital Media Design program is closely affiliated with the Center for Human Modeling and Simulation (HMS) and the ViDi Center for Digital Visualization. Research projects are undertaken by heterogeneous teams of graduate and undergraduate students and visitors. Undergraduates who contribute in a substantial way become co-authors on publications. In the past few years, summer researchers in HMS and ViDi have had papers published in notable computer graphics conferences and journals.
From year to year the internship topics vary depending on funding, research needs, and student interests. The projects have overall faculty guidance, but students are expected to learn new software systems, do extensive programming, contribute to archival materials (software, documentation, and written papers), and orally present their work to others. Posters and participation in department-wide research demonstrations are strongly encouraged.
Many of the participating undergraduate students from previous summers are recognized through authorship in published papers.
- The UG students include 11 women (Emiliya Al Yafei, Mabel Ogiriki, Elissa Wolf, Jennie Shapira, Fannie Liu, Teresa Fan, Nicole Nelson, Samantha Raja, Rebecca Fletcher, Nancy Yu, Desiree Velázquez-Rios) and 18 men (Josh Nadel, Youssef Victor, Charles Wang, Kai Ninomiya, Francisco Garcia, Nathan Marshak, Yu Wang, Max Gilbert, Daniel Garcia, Matthew Jones, Robert Mead, Dan Markowitz, Ian Perera, Matthew Croop, Jeremy Cytryn, Jonathan McCaffrey, Matt Kuruc, Vijay Nair).
- Collectively they account for 25 publications: 6 in journals, 18 in conference proceedings, and 1 conference poster. UG students are first authors on 5 of these. Elissa Wolf was first author on a journal paper in Presence (MIT Press).
- Francisco Garcia co-authored 5 papers, Jennie Shapira co-authored 3, and Kai Ninomiya 2.
- Francisco Garcia, Jennie Shapira, Max Gilbert, and Nathan Marshak contributed to the book 'Virtual Crowds: Steps Toward Behavioral Realism' (Morgan & Claypool, 2015).
- 7 students, Fannie Liu, Samantha Raja, Kai Ninomiya, Francisco Garcia, Yu Wang, Max Gilbert, and Ian Perera, continued with graduate studies in computer science; of these, 3 students, Francisco Garcia, Yu Wang, and Ian Perera, embarked on PhD degrees in Computer Science.
2019 Research Topics include:
Norman Badler
Spatialized Performance And Ceremonial Event Simulations: SPACES
SPACES is envisioned as a parameterized, spatially and temporally situated, Augmented Reality simulation of large-scale public ceremonies. In prehistoric contexts these activities must be hypothesized from artifacts, architectural and geophysical remains, documentary sources, and cultural context. SPACES should create plausible variations that may be assessed both visually (qualitatively) and quantitatively. This approach is fundamentally different from one-off “reconstructions”: exploring variations may be essential to determine which performance possibility is most compatible with the evidence. In contemporary settings, SPACES could be used to design organized public parades and celebrations.
Although computerized crowd simulations exist, most effort has been directed toward low-level navigation, collision avoidance, and trajectory realism. “Higher-level” organization is left to user discretion, artistic decisions, or creative goals. Crowds are often behaviorally homogeneous with only vague overall purpose.
SPACES will center on AR participation in processional environments: what activities occur where, approximately how long they last, what objects agents carry, use, or play, how sound and motion are coordinated, and how participants interact with one another. We will develop a user interface to control such parameters. SPACES will use Magic Leap AR through Unreal via Blueprint procedures.
The SPACES AR app will allow a user to embed herself as a crowd participant and active performer. AR requires highly realistic graphical environments and responsive behaviors in the other characters to cement the sense of cultural presence: “the feeling of being and making sense there together.” What better way to experience the ethos of a bygone culture than by being embedded in its public ceremonial practices?
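To make "such parameters" concrete, here is a small, purely hypothetical C++ sketch of what one parameterized procession segment might look like; the struct name and fields are invented for illustration and are not the actual SPACES design.

```cpp
// Hypothetical parameter set for one segment of a simulated procession.
// The struct and its fields are illustrative assumptions, not the actual
// SPACES interface.
#include <string>
#include <vector>

struct ProcessionSegment {
    std::string activity;            // e.g. "chant", "dance", "offering"
    std::string location;            // named plaza or stage where it occurs
    double durationMinutes;          // approximate duration
    int participantCount;            // how many agents take part
    std::vector<std::string> props;  // objects carried, used, or played
    std::string soundCue;            // audio coordinated with the motion
    bool spectatorsInteract;         // do bystanders respond to performers?
};

// A ceremony is an ordered sequence of such segments that a user could vary
// to explore alternative performance hypotheses.
using Ceremony = std::vector<ProcessionSegment>;
```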
Chenfanfu Jiang
"Multi-physics" phenomena involves in the simulation of solids, fluids, through their different phases, scales and interactions. There is not a numerically perfect physics-based simulation scheme that is always suitable for all scenarios. We desire to hybrid strengths of different methods to enable animating complex material interactions. The summer projects will target at several aspects for multi-physics simulation, with a focus on the Material Point Method and Finite Element Method:
- (1) High-performance GPU optimization of existing schemes;
- (2) Novel numerical integrators for efficient simulation of hard, constrained cases;
- (3) Coupled, spatially and temporally adaptive hybrid simulation across schemes;
- (4) Investigation of frictional contact, and resolution of algorithm-inherent, geometry-related problems.
The student will work with state-of-the-art solid/fluid C++ solvers, develop new features, and experiment with research ideas. The student will also work with 3D software such as Houdini for modeling and rendering simulation geometry. A C++ background and some experience with physics-based simulation are required. Research accomplishments can lead to publications and to collaborations with the animation/visual-effects industry on state-of-the-art simulation techniques.
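As a rough illustration of the kind of C++ solver code involved, below is a minimal, simplified sketch of the particle-to-grid (P2G) transfer at the heart of the Material Point Method, using quadratic B-spline weights. It deliberately omits the stress update, boundary conditions, the grid-to-particle transfer, and all of the performance and coupling concerns listed above; the type and function names are invented for the example.

```cpp
// Minimal 2D sketch of the Material Point Method's particle-to-grid (P2G)
// transfer with quadratic B-spline weights. Purely illustrative: no stress,
// no boundary handling, no GPU concerns.
#include <array>
#include <cmath>
#include <vector>

constexpr int    N      = 64;        // grid nodes per side (unit square domain)
constexpr double DX     = 1.0 / N;   // grid spacing
constexpr double INV_DX = 1.0 / DX;

struct Particle {
    double x[2];   // position, assumed a few cells away from the domain border
    double v[2];   // velocity
    double mass;
};

struct GridNode {
    double mass   = 0.0;
    double mom[2] = {0.0, 0.0};   // momentum; divide by mass to get velocity
};

void particleToGrid(const std::vector<Particle>& particles,
                    std::vector<GridNode>& grid /* size N*N */) {
    for (const Particle& p : particles) {
        int    base[2];
        double fx[2];
        std::array<std::array<double, 3>, 2> w;   // 1D weights per axis
        for (int d = 0; d < 2; ++d) {
            base[d] = static_cast<int>(std::floor(p.x[d] * INV_DX - 0.5));
            fx[d]   = p.x[d] * INV_DX - base[d];
            // Quadratic B-spline weights for the 3-node stencil on this axis.
            w[d][0] = 0.5  * (1.5 - fx[d]) * (1.5 - fx[d]);
            w[d][1] = 0.75 - (fx[d] - 1.0) * (fx[d] - 1.0);
            w[d][2] = 0.5  * (fx[d] - 0.5) * (fx[d] - 0.5);
        }
        // Scatter mass and momentum to the 3x3 neighborhood of grid nodes.
        for (int i = 0; i < 3; ++i) {
            for (int j = 0; j < 3; ++j) {
                double weight = w[0][i] * w[1][j];
                GridNode& node = grid[(base[0] + i) * N + (base[1] + j)];
                node.mass   += weight * p.mass;
                node.mom[0] += weight * p.mass * p.v[0];
                node.mom[1] += weight * p.mass * p.v[1];
            }
        }
    }
}
```

A complete MPM step would divide nodal momentum by nodal mass to obtain grid velocities, apply internal and external forces from a constitutive model, and transfer the result back to the particles.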
Stephen Lane
Development of Augmented Reality Applications for Large Screen Displays
- The goal of this project is to create a compelling Unity demo of a user holding or wearing an Android-based smartphone while interacting with content shown on a large-screen display. Both a handheld mode and a head-mounted display mode (using the Gear VR HMD) will be developed using the Google ARCore SDK and the Vuforia 7 computer vision plugin. The project involves estimating the position and orientation of the smartphone/HMD with respect to the display screen (and optionally the user's head with respect to the handheld smartphone) in order to create new multi-user game experiences and crowd-sourced augmented reality application content (see the pose-estimation sketch below).
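As a minimal sketch of the pose-estimation step (using OpenCV's solvePnP rather than the project's ARCore/Vuforia stack, and with placeholder screen dimensions, intrinsics, and detections), one might estimate the phone camera's pose relative to the display from the screen's known physical corners and their detected pixel locations:

```cpp
// Illustrative sketch, not the project's pipeline: estimate the phone camera's
// pose relative to a large display with OpenCV's solvePnP. All numbers are
// placeholders.
#include <iostream>
#include <vector>
#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>

int main() {
    // Screen corners in the display's own frame (meters), e.g. a 1.6 m x 0.9 m screen.
    std::vector<cv::Point3f> screenCorners = {
        {0.0f, 0.0f, 0.0f}, {1.6f, 0.0f, 0.0f}, {1.6f, 0.9f, 0.0f}, {0.0f, 0.9f, 0.0f}};

    // Corresponding pixel detections in the phone image (placeholder values;
    // in practice they would come from a marker or feature detector).
    std::vector<cv::Point2f> imageCorners = {
        {412.f, 310.f}, {890.f, 322.f}, {878.f, 655.f}, {405.f, 640.f}};

    // Assumed phone camera intrinsics and negligible lens distortion.
    cv::Mat K = (cv::Mat_<double>(3, 3) << 1000, 0, 640,
                                           0, 1000, 360,
                                           0,    0,   1);
    cv::Mat dist = cv::Mat::zeros(5, 1, CV_64F);

    cv::Mat rvec, tvec;
    if (cv::solvePnP(screenCorners, imageCorners, K, dist, rvec, tvec)) {
        cv::Mat R;
        cv::Rodrigues(rvec, R);   // rotation vector -> 3x3 rotation matrix
        // [R | tvec] maps screen-frame points into the camera frame; inverting
        // this rigid transform gives the phone's position and orientation
        // relative to the display.
        std::cout << "t = " << tvec.t() << std::endl;
    }
    return 0;
}
```

In the actual project, the 2D detections and camera intrinsics would come from the ARCore/Vuforia tracking pipeline rather than being hard-coded.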
Interactive Authoring of Augmented Reality Task Content
- The goal of this project is to develop an innovative in situ authoring system that transforms the way augmented reality content is created for many training and education tasks. This will be accomplished by capturing the movements, actions, and verbal descriptions of an instructor, trainer, or designer as they actually perform a task. The system will then automatically segment the captured task data into goal-directed subtasks (see the segmentation sketch below) and adapt it for use in a "Helping Hands" application that automatically prompts and guides users while they perform the task themselves, taking into account the specifics of the task environment (physical layout, location of parts and objects, etc.).
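One simple way to segment captured task data, shown in the hedged sketch below, is to split the motion trace wherever the tracked hand or tool pauses for longer than a threshold; the Sample struct, thresholds, and function name are assumptions for illustration, not the project's actual segmentation method.

```cpp
// Illustrative sketch: split a captured hand/tool-position trace into
// candidate subtasks at sustained pauses (speed below a threshold for a
// minimum duration). Thresholds and types are made-up assumptions.
#include <cmath>
#include <cstddef>
#include <utility>
#include <vector>

struct Sample {
    double t;        // timestamp (seconds)
    double x, y, z;  // tracked position (meters)
};

// Returns (startIndex, endIndex) pairs, one per detected subtask segment.
std::vector<std::pair<std::size_t, std::size_t>>
segmentByPauses(const std::vector<Sample>& trace,
                double speedThreshold = 0.05,  // m/s: "not moving"
                double minPauseSec = 0.75) {   // pause long enough to split
    std::vector<std::pair<std::size_t, std::size_t>> segments;
    if (trace.size() < 2) return segments;

    std::size_t segStart = 0;     // start of the current segment
    std::size_t pauseStart = 0;   // start of the current low-speed run
    bool inPause = false;

    for (std::size_t i = 1; i < trace.size(); ++i) {
        const double dt = trace[i].t - trace[i - 1].t;
        const double dx = trace[i].x - trace[i - 1].x;
        const double dy = trace[i].y - trace[i - 1].y;
        const double dz = trace[i].z - trace[i - 1].z;
        const double speed =
            dt > 0.0 ? std::sqrt(dx * dx + dy * dy + dz * dz) / dt : 0.0;

        if (speed < speedThreshold) {
            if (!inPause) { inPause = true; pauseStart = i; }
        } else {
            // Motion resumed: if the pause was long enough, close a segment
            // at the pause start and begin a new one here.
            if (inPause && trace[i].t - trace[pauseStart].t >= minPauseSec &&
                pauseStart > segStart) {
                segments.emplace_back(segStart, pauseStart);
                segStart = i;
            }
            inPause = false;
        }
    }
    segments.emplace_back(segStart, trace.size() - 1);
    return segments;
}
```

In practice, the captured verbal descriptions and detected object interactions would also inform where the subtask boundaries fall.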
Augmented Reality Enhancement of Medical Simulation and Training Applications
- The goal of this project is to develop a proof-of-concept Augmented Reality (AR) system for medical simulation and training applications (such as anesthesiology and neurosurgery) that allows participants to see a computer-generated 3D model that appears lifelike and dynamic while they interact with the actual physical model in a haptically realistic manner. Computer vision-based object recognition and pose estimation techniques will be used to register the computer-generated 3D models with their corresponding physical counterparts.
Positions are limited by available funding. Decisions will be made and offer letters sent by approximately April 12, 2019.
Recent Publications Featuring Undergraduate Students
"Recreating Pre-Columbian life in the Baures region of the Bolivian Amazon." | |
"Crowd simulation incorporating thermal environments and responsive behaviors." | |
"The Distribution of Carried Items in Urban Environments" | |
"Planning Approaches to Constraint-Aware Navigation in Dynamic Environments" | |
"Generating a Multiplicity of Policies for Agent Steering in Crowd Simulation" | |
"ADAPT: The Agent Development and Prototyping Testbed" | |
"Planning Approaches to Constraint-Aware Navigation in Dynamic Environments" | |
"An Event-Centric Planning Approach for Dynamic Real-Time Narrative" | |
"The Effect of Posture and Dynamics on the Perception of Emotion" | |
"Pedestrian Anomaly Detection Using Context-Sensitive Crowd Simulation" | |
"Animating Synthetic Dyadic Conversations With Variations Based on Context and Agent Attributes" | |
"A Data-Driven Appearance Model for Human Fatigue" | |
"Parameterizing Behavior Trees" | |
"Human Model Reaching, Grasping, Looking and Sitting Using Smart Objects" | |
"Fruit Senescence and Decay Simulation" | |
"CRAM It! A Comparison of Virtual, Live-Action and Written Training Systems for Preparing Personnel to |
See all CG@Penn publications here.