<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<HTML><HEAD>
<META HTTP-EQUIV="Content-Type" CONTENT="text/html; charset=us-ascii">
<TITLE>Message</TITLE>
<META content="MSHTML 6.00.2800.1264" name=GENERATOR></HEAD>
<BODY>
<DIV><SPAN class=107593415-28102003><FONT face=Arial size=2>Would you believe
there are three separate interfaces on vtkProp3D, the base class for vtkActor,
all of which let you transform an actor? To drive actor transformations from
haptic input, you poke the transformation into the vtkActor, or rather into its
base class vtkProp3D. Then you can use a vtkCellPicker to find which point is
picked; vtkCellPicker knows how to apply these transformations in
reverse.</FONT></SPAN></DIV>
<DIV><SPAN class=107593415-28102003><FONT face=Arial size=2>1. You can take
your transformation from haptic input and set the vtkActor's position and
orientation directly, using the conversions you must already have seen in
vtkTransform or vtkMatrix4x4. You would do this at every
timestep.</FONT></SPAN></DIV>
<DIV><SPAN class=107593415-28102003><FONT face=Arial size=2>2. You can take
your transformation, in whatever form you have it, and set the actor's
UserMatrix property directly. This can be a shorthand way of doing the same as
setting position and orientation separately, but beware: if you set both the
UserMatrix and the position and orientation, the two transformations compound
each other.</FONT></SPAN></DIV>
<DIV><SPAN class=107593415-28102003><FONT face=Arial size=2>3. If you call
vtkActor::SetUserTransform, then this works slightly differently. One
difference between a vtkTransform and a vtkMatrix4x4 is that a vtkTransform
remembers where it came from. It is a live link. If you call
SetUserTransform with a pointer to the same vtkTransform as that set by your
haptic input, or even a pointer to a vtkTransform derived from your haptic input
by chaining transforms, then the vtkActor will be updated for you every time you
get more haptic data.</FONT></SPAN></DIV>
<DIV><SPAN class=107593415-28102003><FONT face=Arial size=2>For an example of
how to convert from the vtkActor's coordinates to world coordinates, you might
look in vtkPicker::Pick()'s implementation, or you might let vtkCellPicker do
the work for you if that's possible. (It would be with some modifications
on your part to excise the 2D mousing code.) The best advice I've seen on
how to do the conversions, aside from reading Feiner, van Dam, et al, is from
the VR Juggler guide's intro to matrices, which suggests writing matrices in the
form w_M_a to represent a matrix transforming from actor space to world space, in
order to keep your transformations straight. Instead of using the
terminology of model, view and perspective transformation matrices, VTK has
actor coordinates, world coordinates and screen coordinates. It's like
this:</FONT></SPAN></DIV>
<DIV><SPAN class=107593415-28102003><FONT face=Arial size=2>&nbsp;&nbsp;screen
coordinates &lt;- s_M_w &lt;- world coordinates &lt;- w_M_a &lt;- actor
coordinates</FONT></SPAN></DIV>
<DIV><SPAN class=107593415-28102003><FONT face=Arial size=2>where the user
transform you put in the actor is a transformation from the actor frame to
the world frame, w_M_a, and the matrix s_M_w is the vtkCamera's "composite
perspective transform." This is a combination of what graphics people call
the view and perspective transformation matrices; vtkCamera also offers you
what is usually called the view matrix on its own, as its "view
transform."</FONT></SPAN></DIV>
<DIV><SPAN class=107593415-28102003><FONT face=Arial size=2>Drew
Dolgert</FONT></SPAN></DIV>
<DIV><SPAN class=107593415-28102003><FONT face=Arial size=2>Cornell Theory
Center</FONT></SPAN></DIV>
<DIV><SPAN class=107593415-28102003><FONT face=Arial
size=2></FONT></SPAN> </DIV></BODY></HTML>