|Promises and Pitfalls of Augmented Reality and Telepresence
Fueled by inexpensive displays and sensors from smartphones and tablets, virtual reality seems to have finally arrived, to widespread media coverage. Less well known is the previous VR boom, in the 1990s, which was also launched by technological developments: affordable real-time PC graphics and miniature TVs. After a decade of unfulfilled expectations, the 1990s VR boom faded. The current boom, with its promise of mass-market sales, carries substantially higher expectations and perhaps higher risks. One of the most popular scenarios for VR continues to be telepresence, envisioned at least since Ivan Sutherland’s 1965 “Ultimate Display” and Star Trek’s 1974 Holodeck. Recent advances in 3D scene acquisition and reconstruction, in augmented and virtual reality displays, and in wide-area tracking have brought these dreams closer to realization. This talk will review some of these advances and assess the difficulties and risks of the remaining challenges.
Henry Fuchs (PhD, University of Utah, 1975) is the Federico Gil Distinguished Professor of Computer Science and Adjunct Professor of Biomedical Engineering at the University of North Carolina at Chapel Hill. He is one of three co-directors (together with Nadia Magnenat-Thalmann and Markus Gross) of the BeingThere International Research Center in Telepresence, a collaboration between ETH Zurich, NTU Singapore, and UNC Chapel Hill. Active in computer graphics since the 1970s, Fuchs has coauthored over 200 papers on a variety of topics, including rendering algorithms (BSP trees), graphics hardware (Pixel-Planes), virtual environments, telepresence, and medical and training applications. He is a member of the National Academy of Engineering, a fellow of the American Academy of Arts and Sciences, and a recipient of the 1992 ACM SIGGRAPH Achievement Award, the 2013 IEEE VGTC Virtual Reality Career Award, and the 2015 ACM SIGGRAPH Steven Anson Coons Award for Outstanding Creative Contributions to Computer Graphics.
|Towards the Expressive Design of Virtual Worlds: Combining Knowledge and Control
Despite our great expressive skills, we humans lack an easy way of conveying the 3D worlds we imagine. While impressive advances have been made in the last fifteen years to evolve digital modeling systems into gesture-based interfaces that enable users to sketch or sculpt in 3D, modeling is generally limited to the design of isolated, static shapes. In contrast, virtual worlds are composed of distributions or assemblies of elements that are too numerous to be created or even positioned one by one; the shapes of these elements may heavily depend on physical laws and on interaction with their surroundings; and many of them may be animated, meaning that they should be designed not only in space but also over time. In this talk, I will explore recent extensions of expressive modeling metaphors such as sketching, painting, transfer, and sculpting to these complex cases. I will show that models for shape and motion need to be redefined from a user-centered perspective, and in particular must embed the knowledge necessary to make them respond intuitively to the user’s control gestures.
Marie-Paule Cani is a Professor of Computer Science at Grenoble University & Inria. Over the years she has contributed a number of models for shapes and motion, such as implicit surfaces, multi-resolution physically-based animation methods, and hybrid representations for real-time natural scenes. Following a long-standing interest in virtual sculpture, she has recently been searching for more efficient ways to create 3D content, such as combining sketch-based interfaces with user-centered 3D models. For this work she received the Eurographics Outstanding Technical Contributions Award in 2011, and an ERC Advanced Grant and a silver medal from CNRS in 2012. She was elected to Academia Europaea in 2013 and held the Informatics and Computational Sciences chair at the Collège de France in 2014–2015. She represented computer graphics on the ACM Publications Board from 2011 to 2014 and is an associate editor of ACM Transactions on Graphics. A Fellow of Eurographics since 2005, she is currently vice-chair of the Eurographics Association.
|The Future of Sketch: How Would Leonardo Draw Today?
From Leonardo’s anatomical sketches to Frank Gehry’s conceptual drawings to movie storyboards, sketching is fundamental to exploring ideas, creating new knowledge, and designing new products. Computer technology has revolutionized text, photography, and music, but, somewhat surprisingly, sketching has remained largely unchanged. Today’s digital illustration packages merely simulate drawing on paper, while adding a few new capabilities, such as panning, zooming, and the ability to transmit digitally. These packages do not, however, accelerate the sketching process or enhance a sketch’s value as a communication tool.
Julie Dorsey is a Professor of Computer Science at Yale University, where she teaches computer graphics. She came to Yale in 2002 from MIT, where she held tenured appointments in both the Department of Electrical Engineering and Computer Science (EECS) and the School of Architecture. She received undergraduate degrees in architecture and graduate degrees in computer science from Cornell University. In addition to serving on numerous conference program committees, she was Papers Chair for ACM SIGGRAPH 2006, was Editor-in-Chief of ACM Transactions on Graphics from 2012 to 2015, and is an editorial-board member for Computers and Graphics and Foundations and Trends in Computer Graphics and Vision. She has received several professional awards, including MIT’s Edgerton Faculty Achievement Award, a National Science Foundation CAREER Award, and an Alfred P. Sloan Foundation Research Fellowship. She is a fellow of the Radcliffe Institute at Harvard and the Whitney Humanities Center at Yale. Together with two of her colleagues, she recently helped establish the new Computing and the Arts major at Yale.