
There are two possibilities that occurred to me:
1) Triangle meshes are typically supplied as one of strips, fans, or indexed triangle lists. Since the first two automatically imply adjacency of (at least some) triangles, and the last is usually optimised to maximise vertex cache reuse, these will also result in considerable spatial coherence. This, in turn, allows Greene's hierarchical Z-buffer to be effective.
2) If we can take it as read that the polygon (read triangle) vertices have already been projected into screen coordinates, then setup in this case is any of the processing that takes the three triangle vertices, $V_1, V_2, V_3$ (where each $V$ has components $(X^{screen}, Y^{screen}, Z^{screen},$ inverse W, texture coordinates, RGB values etc.$)$), and maps them to per-pixel evaluation coefficients.
In older systems that usually meant DDA values for walking along the edges and then interpolating along the scanlines, but this gradually changed over to 'half-plane' evaluation schemes, which make parallel evaluation of multiple pixels feasible.
Assuming it’s the latter, then for the edge equations this can be done by computing the inverse/adjoint of a 3×3 matrix consisting of a row of the three X values, a row of the Ys, and a row of 1s:
$$
\begin{pmatrix}
V_1^{X_{screen}} & V_2^{X_{screen}} & V_3^{X_{screen}}\\
V_1^{Y_{screen}} & V_2^{Y_{screen}} & V_3^{Y_{screen}}\\
1 & 1 & 1
\end{pmatrix}
$$
Each row of the adjoint of the above gives the coefficients of an ‘edge equation’ of the form $d_{edge} = a \cdot X + b \cdot Y + c$, which can be used for ‘inside’ tests.
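To make that concrete: writing $X_i, Y_i$ for the screen coordinates of $V_i$, the first row of the adjoint works out to $(Y_2 - Y_3,\; X_3 - X_2,\; X_2 Y_3 - X_3 Y_2)$, i.e. the edge through $V_2$ and $V_3$, and similarly for the other two rows. Below is a minimal C sketch of that setup plus a per-pixel ‘inside’ test; the names (`EdgeEq`, `setup_edges`, `edge_eval`) are hypothetical, and it assumes a counter-clockwise triangle so that all three edge values are non-negative inside:

```c
#include <stdio.h>

/* Hypothetical illustration: the three edge-equation coefficient triples
 * (a, b, c) are the rows of the adjoint (adjugate) of the 3x3 matrix built
 * from the projected X and Y values plus a row of ones. */
typedef struct { float a, b, c; } EdgeEq;

static void setup_edges(const float x[3], const float y[3], EdgeEq e[3])
{
    /* Row i of the adjugate involves only the other two vertices,
     * i.e. it describes the edge opposite vertex i. */
    for (int i = 0; i < 3; ++i) {
        int j = (i + 1) % 3, k = (i + 2) % 3;
        e[i].a = y[j] - y[k];
        e[i].b = x[k] - x[j];
        e[i].c = x[j] * y[k] - x[k] * y[j];
    }
}

/* Half-plane test: evaluate d = a*X + b*Y + c at a pixel position.
 * With a counter-clockwise winding, all three values are >= 0 inside
 * (the exact sign convention depends on winding and Y direction). */
static float edge_eval(const EdgeEq *e, float px, float py)
{
    return e->a * px + e->b * py + e->c;
}

int main(void)
{
    const float x[3] = { 2.0f, 10.0f, 4.0f };
    const float y[3] = { 1.0f,  3.0f, 9.0f };
    EdgeEq e[3];
    setup_edges(x, y, e);

    /* Test a pixel roughly in the middle of the triangle. */
    float px = 5.0f, py = 4.0f;
    int inside = edge_eval(&e[0], px, py) >= 0.0f &&
                 edge_eval(&e[1], px, py) >= 0.0f &&
                 edge_eval(&e[2], px, py) >= 0.0f;
    printf("pixel (%g, %g) inside: %d\n", px, py, inside);
    return 0;
}
```

Because each $d_{edge}$ is affine in $X$ and $Y$, it can be evaluated (or stepped incrementally) over whole blocks of pixels at once, which is what makes the half-plane approach amenable to parallel evaluation.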
(FWIW this might be a question for https://computergraphics.stackexchange.com/ )