Transcript deussen.ppt
Computer-Generated Pen-and-Ink Illustration of Trees
Oliver Deussen & Thomas Strothotte, University of Magdeburg

The Idea
• Standard NPR papers have not specifically focused on foliage
• Current NPR plant models are generic…
• …even though we can create models of specific species
• Let's start with an exact model and remove unnecessary geometry

Giving Away the Ending
• (Figures: tree model generated with the xfrog system; pen-and-ink illustration of the complex model)

Standard Issues in Traditional Tree Illustration (1)
• The tree skeleton (trunk) is usually drawn up to the 2nd branching level
– Prominent silhouette lines
– Crosshatching for shading
• How do you determine the silhouette?
• Where do you place the crosshatches?
• How high is high enough?

Standard Issues in Traditional Tree Illustration (2)
• Foliage is composed of 3 areas:
– The top is usually well lit, so minimal detail is shown
– Half-shadow areas show more foliage detail
– Deep-shadow regions (many techniques used)
• What shape are the leaves?
• What shading techniques to use (solid black, increasing leaf-silhouette detail, thicker lines)?

And So, To Begin...
• The model is stored in 2 files:
– One for the trunk (to the 2nd branching level)
– One for the leaves (position & normal)
• The trunk is drawn using traditional NPR algorithms
• The leaves are drawn with some primitive, with detail assigned by a depth-difference algorithm

The Tree Skeleton
• Uses Markosian et al.'s or Raskar & Cohen's silhouette algorithms
• Crosshatches are placed with the Difference Image algorithm or a Floyd-Steinberg variant

The Fun Part: Leaves (!)
• The system uses zero-order derivatives (the depth values themselves) to determine important lines on a surface
• An outline is drawn if the maximal depth difference between a leaf and its neighboring leaves exceeds a threshold:
– Draw the leaves as solids & read back the depth buffer
– For each pixel, find out how far it lies in front of its neighbors
– Use that data to draw the leaves

Big Ol' Equation Time
• d0 & d1 = min and max values representable in the depth buffer
• z0 & z1 = corresponding depth values of the near & far clipping planes in the camera projection
• d = depth value in [0..1]
• z = depth in the camera coordinate system

More Equation Stuff
• Depth differences can be computed in eye coordinates or directly on depth-buffer values
– The latter method results in larger changes
• The depth-difference threshold is set as a percentage of the depth range of the tree
• Other numbers:
– d0 = 0 & d1 = 65535
– z0 = 1 & z1 = 11 (approx. for real trees)

Varying Equation Inputs
• Tree 1 rendered with varying primitive size & threshold:
– Disc size = 0.15, threshold = 1000
– Disc size = 0.7, threshold = 2000

Free LOD
• Essentially, here's what you get:
– Since depth-buffer differences are non-linear, you get detail up front & less in the back
– Changing the ratio of z1 to z0 alters the non-linearity
– Primitive size can be based on the depth of the tree
• (Figures: primitive size & threshold constant vs. primitives enlarged with distance)

Primitive Choices
• Primitive shapes can also be altered for a more accurate representation of real leaves
– One could use an actual 3D model...
– …or use a subset of possible views
• (Figures: shadow in black, threshold = 1000 vs. shadow by detail, threshold = 6000)

The Software Framework
• 1: Determine depth differences
– Interactive: both stem & foliage are done together
• 2: Software shadows are created & stored
• 3: Draw the pixels above the threshold
• For higher-quality images:
– Vectorize the stem & foliage bitmaps separately
• least-squares fitting
• an index buffer (stores primitive IDs)
– Draw polygons by spline interpolation

Results & Measures
• (a): 13,200 elliptic primitives - 10 seconds (SGI Octane, Maximum Impact)
• (c): 200,000 leaves reduced to 16,200 particles
• (f): 90,000 tree particles + 23,000 grass particles - 1 minute
• Interactive: 3 trees of 20,000 particles each plus 25,000 ground particles at 3 frames/sec (SGI Onyx2)

Conclusion & Questions
• Future work:
– Crosshatching on the leaves
– Needs continuous LOD
– Cartoons & other non-realistic effects
• My questions:
– What, exactly, is d?
– Advantages over previous (non-species-specific) efforts?
– 3 frames per second. Gee, that's helpful.
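The depth-difference test from the leaf slides can be sketched as follows. This is my reconstruction in Python, not the authors' code: it converts 16-bit depth-buffer values to eye-space z using the standard perspective depth inversion implied by the d0/d1/z0/z1 definitions, then outlines pixels that lie in front of a neighbor by more than a threshold. The threshold here is in eye-space units, whereas the slides' thresholds (1000-6000) appear to be raw buffer units.

```python
# My reconstruction of the leaf outline test, not the authors' code.
D0, D1 = 0.0, 65535.0   # min/max representable depth-buffer values (slide: d0, d1)
Z0, Z1 = 1.0, 11.0      # near/far clipping planes, approx. for real trees (slide: z0, z1)

def depth_to_eye_z(d):
    """Invert the perspective depth mapping: buffer value d -> eye-space depth z."""
    t = (d - D0) / (D1 - D0)                 # normalize to [0, 1]
    return (Z0 * Z1) / (Z1 - t * (Z1 - Z0))  # gives z0 at t = 0, z1 at t = 1

def outline_mask(depth, threshold):
    """Mark pixels whose depth difference to some neighbor exceeds the threshold.

    `depth` is a 2-D list of raw buffer values; the threshold is in eye-space
    units (the slides note that raw buffer values can be thresholded instead).
    """
    h, w = len(depth), len(depth[0])
    z = [[depth_to_eye_z(depth[y][x]) for x in range(w)] for y in range(h)]
    mask = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    # outline where this pixel lies well in front of a neighbor
                    if 0 <= ny < h and 0 <= nx < w and z[ny][nx] - z[y][x] > threshold:
                        mask[y][x] = True
    return mask
```

The non-linearity of this mapping is what the "Free LOD" slide exploits: a fixed eye-space gap between leaves produces a large buffer difference near the camera and a small one far away, so a constant threshold yields detail up front and less in the back.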
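The crosshatch placement is described only as "a Floyd-Steinberg variant"; as an illustration, here is plain Floyd-Steinberg error diffusion (a generic sketch, not the paper's Difference Image algorithm or its specific variant), which turns a grayscale shading image into a 0/1 mask of hatch-dot positions:

```python
# Generic Floyd-Steinberg error-diffusion dither (my illustration, not the
# paper's Difference Image algorithm). Input: grayscale intensities in
# [0, 255]; output: 1 where a hatch dot should be placed, else 0.
def floyd_steinberg(gray):
    h, w = len(gray), len(gray[0])
    img = [row[:] for row in gray]      # work on a copy
    dots = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            old = img[y][x]
            new = 255.0 if old > 127.0 else 0.0
            if new == 0.0:
                dots[y][x] = 1          # dark pixel -> hatch mark here
            err = old - new
            # diffuse the quantization error onto unprocessed neighbors
            for dx, dy, wgt in ((1, 0, 7 / 16), (-1, 1, 3 / 16),
                                (0, 1, 5 / 16), (1, 1, 1 / 16)):
                nx, ny = x + dx, y + dy
                if 0 <= nx < w and 0 <= ny < h:
                    img[ny][nx] += err * wgt
    return dots
```

Darker shading yields denser dot clusters, which can then seed crosshatch strokes on the trunk; the transcript does not say how the authors' variant differs.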