View Issue Details
ID: 0011596
Project: ParaView
Category: Bug
View Status: public
Date Submitted: 2010-12-10 14:17
Last Update: 2011-09-01 13:31
Reporter: Greg Abram
Assigned To: Andy Bauer
Priority: normal
Severity: minor
Reproducibility: always
Status: closed
Resolution: fixed
Platform: Intel
OS: Linux
OS Version: RHEL8
Product Version:
Target Version:
Fixed in Version: 3.12
Summary: 0011596: Errors in Coprocessing
Description: I've included a "simulator" (cop.C) which follows the Coprocessing_example from the wiki. In each of 2 processes in an MPI run, it creates a partition of an 8x8x8 structured dataset (using vtkImageData), dividing along Z at 3. Each process then writes out its partition explicitly, and then hands it to the co-processor.

The co-processor ("vol.py") is produced using the plugin, and consists of an input that is passed directly to the Parallel Image Data Writer. I created this using an unpartitioned 8x8x8 vtkImageData object ("single.vti").

When I run the simulator, two sets of two vti files are created - one (part_{0,1}.vti) that the simulator wrote explicitly, and one (volume_0_{0,1}.vti) that is produced by the co-processor as partitions of volume_0.pvti. If you look at the extents of the explicitly written partitions, you see 0,7,0,7,0,3 and 0,7,0,7,3,7 - which I would expect. However, if you look at the extents in the co-processor produced .vti's you see 0,7,0,3,0,3 and 0,7,3,7,3,7, and the .pvti file can't be loaded into ParaView.

"cop8.C" is a similar "simulation" that divides the same 8x8x8 space along each axis at 3. "contour.py" is a co-processing script that runs a contour filter and produces a .pvtp file. If I load that and overlay it with the bounding boxes of the 8 .vti files that are explicitly written by cop8.C, I get whats seen in bad.png. If I run Contour on the 8 .vti's (at 13.5) I get good.png, which is what I'd expect. Note that at least some of the bad contour fragments are at least in the correct plane.
Tags: No tags attached.
Attached Files: coprocessing-bug.tgz (62,766 bytes) 2010-12-10 14:17

 Relationships

  Notes
(0024245)
Andy Bauer (developer)
2010-12-17 15:15

The problem is that you need to use an extent translator to set the extents on each process (probably vtkPVExtentTranslator). For now you can replace:
id->SetExtent(sx, ex, sy, ey, sz, ez);
id->SetWholeExtent(0, res-1, 0, res-1, 0, res-1);
with:
id->SetDimensions(res, res, res);

You'll also need to specify the point/cell data completely on each process. This gives you parallelism in processing the data, but not data parallelism, since every process holds the entire grid. Another option would be to use a multiblock dataset to get data parallelism.
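
[Editor's note: a sketch of what that workaround looks like end to end; the array name "values", the variable res, and the fill loop are placeholders, not from the report.]

#include <vtkImageData.h>
#include <vtkFloatArray.h>
#include <vtkPointData.h>

vtkImageData* id = vtkImageData::New();
id->SetDimensions(res, res, res);                 // full 8x8x8 grid on every process
vtkFloatArray* vals = vtkFloatArray::New();
vals->SetName("values");
vals->SetNumberOfTuples(res * res * res);         // complete point data, not just this rank's slab
// ... fill all res*res*res values on every process ...
id->GetPointData()->SetScalars(vals);
vals->Delete();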

I'm leaving this bug open since the above doesn't fix the problem, it only gets around it in an ad hoc way. Meanwhile, I'll keep looking for a more elegant solution to this bug.
(0024372)
Greg Abram (reporter)
2011-01-03 18:10

I can't really specify the entire grid on each process - the goal here is to co-process with a distributed-memory sim in which each process is responsible for a partition of the data. To get the entire grid on each node I'd have to do a massive AllGather, which would be prohibitive.

So I've been trying the MultiBlock approach. How would that go? My first try was to create a MultiBlock dataset on each sim process containing only the partition that the process owns, and that crashes with a generic MPI error. It does work if each process's MultiBlock contains a block for every partition, with all of them empty except the one that the process owns. However, that results in shading discontinuities at the partition boundaries, and I don't see a way to get it to grok ghost zones.
(0024458)
Andy Bauer (developer)
2011-01-06 11:05

First off, Berk and I are working on this and expect to have this fixed in a couple of days. If you can wait then I'd suggest that. Otherwise you can try the following:

The multiblock solution with uniform grids will result in artificial boundaries since the multiblock doesn't know how the blocks connect. If you do use the multiblock, then I think the best way is to have the number of blocks equal to the number of processes and then do multiblock->SetBlock(myrank, imagedata); as you mentioned above. You may be able to get rid of the artificial boundaries by running a Merge Blocks filter and maybe a Clean to Grid afterward.
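
[Editor's note: a minimal sketch of the multiblock layout described above; myrank and imagedata come from the note itself, while nprocs (the MPI size) and id (the process's vtkImageData piece) are placeholder names.]

#include <vtkMultiBlockDataSet.h>

vtkMultiBlockDataSet* mb = vtkMultiBlockDataSet::New();
mb->SetNumberOfBlocks(nprocs);      // one block slot per process
mb->SetBlock(myrank, id);           // only the locally owned piece is non-NULL
// hand mb (rather than id) to the coprocessor as the input grid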

Another option is to convert your uniform grid to an unstructured grid. Again, this is a waste of memory but at least will scale with the number of processes.
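
[Editor's note: one way to do that conversion is vtkAppendFilter, which outputs a vtkUnstructuredGrid; using it here is a suggestion, not something specified in the note.]

#include <vtkAppendFilter.h>
#include <vtkUnstructuredGrid.h>

vtkAppendFilter* append = vtkAppendFilter::New();
append->AddInput(id);                             // VTK 5.x API; use AddInputData() in VTK 6+
append->Update();
vtkUnstructuredGrid* ug = append->GetOutput();    // same cells, now with explicit points and connectivity
// pass ug to the coprocessor; ghost cells are still needed to avoid seams at partition boundaries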
(0024713)
Andy Bauer (developer)
2011-01-15 10:12

This should now be fixed in ParaView head. The commit SHA is d9d611f. If everything works properly, please mark the issue as closed.
(0025412)
Alan Scott (manager)
2011-02-11 21:44

Closed untested.

 Issue History
Date Modified Username Field Change
2010-12-10 14:17 Greg Abram New Issue
2010-12-10 14:17 Greg Abram File Added: coprocessing-bug.tgz
2010-12-11 11:41 David Partyka Assigned To => Andy Bauer
2010-12-11 11:41 David Partyka Status backlog => tabled
2010-12-17 15:15 Andy Bauer Note Added: 0024245
2011-01-03 18:10 Greg Abram Note Added: 0024372
2011-01-06 11:05 Andy Bauer Note Added: 0024458
2011-01-15 10:12 Andy Bauer Note Added: 0024713
2011-01-15 10:12 Andy Bauer Status tabled => resolved
2011-01-15 10:12 Andy Bauer Fixed in Version => Development
2011-01-15 10:12 Andy Bauer Resolution open => fixed
2011-02-11 21:44 Alan Scott Note Added: 0025412
2011-02-11 21:44 Alan Scott Status resolved => closed
2011-09-01 13:31 Utkarsh Ayachit Fixed in Version Development => 3.12

