MATRIX FACTORIZATION OF MULTIVARIATE BERNSTEIN POLYNOMIALS

Ordinary univariate Bernstein polynomials can be represented in matrix form using factor matrices. In this paper we present the definition and basic properties of such factor matrices, extended from the univariate case to the general case of an arbitrary number of variables by using barycentric coordinates in the hyper-simplices of the respective dimension. The main results of the paper are related to the design of an iterative algorithm for fast convex computation of multivariate Bernstein polynomials based on sparse-matrix factorization. In the process of deriving this algorithm, we investigate some properties of the factorization, including symmetry, commutativity and differentiability of the factor matrices, and address the relevance of this factorization to the de Casteljau algorithm for evaluating curves and surfaces on Bézier form. A set of representative examples is provided, including a geometric interpretation of the de Casteljau algorithm, and the representation by factor matrices of multivariate surfaces and their derivatives on Bézier form. Another new result is the observation that inverting the order of steps of a part of the new factorization algorithm provides a new, matrix-based, algebraic representation of a multivariate generalization of a special case of the de Boor-Cox computational algorithm.

AMS Subject Classification: 15A23, 15A27, 65D07, 65D17, 65D10, 65F30, 65F50
Received: June 25, 2015
© 2015 Academic Publications, Ltd. url: www.acadpubl.eu


Introduction
Univariate Bernstein polynomials were introduced by Sergei Bernstein [1] in his proof of the Weierstrass approximation theorem. They have been studied extensively in the literature since then; see, e.g., [2], or a textbook on approximation theory, e.g., [3].
Bernstein polynomials constitute the basis of Bézier curves and surfaces [4], and they are commonly used as basis functions in spline constructions [5, 6]. One way of representing Bernstein polynomials is in matrix form, as factor matrices. Such matrices are related to de Casteljau's corner-cutting algorithm [7]. The univariate case of Bernstein factor matrices was addressed in [8]; however, univariate Bernstein factor matrices seem to have been exposed and used prior to that, see, e.g., [9], where the de Casteljau algorithm is expressed on matrix form, or the method presented in [10, algorithm 12]. In addition, we mention that Tom Lyche and Knut Mørken have been using a slightly different matrix representation of B-splines in their lecture notes [11] at the University of Oslo. A particularly attractive property of the de Casteljau algorithm and its matrix-based version is the convexity of computation, whereby the total accumulated error of computation stays within the convex hull of the errors made in the process of the algorithm's iterations.
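To make the corner-cutting concrete, here is a minimal sketch of the de Casteljau recursion in pure Python (the function name and the sample control points are our own illustration, not taken from the paper):

```python
def de_casteljau(points, t):
    """One corner-cutting pass per step: each step replaces n points by
    n - 1 convex combinations, so rounding errors stay inside the
    convex hull of the errors committed at each step."""
    pts = [tuple(p) for p in points]
    while len(pts) > 1:
        pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]

# Linear precision: control points sampled from a line reproduce the line,
# here the cubic with coefficients 0, 1, 2, 3 evaluates to 3t.
print(de_casteljau([(0.0,), (1.0,), (2.0,), (3.0,)], 0.4))
```

The convex combinations are exactly the corner cuts of [7]; no subtraction of like-signed quantities occurs, which is the numerical appeal of the scheme.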
The definition of ordinary Bernstein polynomials can be generalized, with preservation of the convexity-of-computation property, by using barycentric coordinates, see, e.g., [12, 13]. In this paper we investigate symmetric and recursive properties of the factor matrices for higher dimensions of the polynomials' argument. Furthermore, we present an equivalent recursive definition which facilitates the construction of Bernstein factor matrices for arbitrary dimensions of the barycentric argument, and investigate some properties which follow from the layout and construction of the matrices. We are interested in these findings mainly because the factorization can be seen to correspond to the de Casteljau algorithm and to a special case of the de Boor-Cox recursion formula for B-splines. Parts of the results, including an outline of the recursive definition of the Bernstein factor matrices, were announced in [14].
The organization of the paper is as follows. In subsection 1.1 we provide a brief description of Bernstein polynomials, the Bézier representation of polynomial curves, and univariate Bernstein factor matrices. Subsection 1.2 concerns some properties of the algebra of square matrices. We generalize the factor matrices to the multivariate setting in section 2, by first considering the case of R^2, and then the d-variate case. Then, in section 3, we address the directional derivatives of Bernstein polynomials and Bernstein factor matrices in the multivariate setting. Section 4 covers the relevance of the factorization to the de Casteljau algorithm, followed by some examples of how the construction can be used. Proofs of some of the lemmas and theorems are provided in section 5. Finally, in section 6, we give our concluding remarks, where we suggest some topics for future work.

Univariate Bernstein polynomials and curves on Bézier form
Computing the binomial expansion

1 = ((1 − t) + t)^n = Σ_{i=0}^{n} C(n, i) t^i (1 − t)^{n−i},   (1)

where t ∈ R, leads to the Bernstein polynomials [1] of degree n, B^n_i(t) = C(n, i) t^i (1 − t)^{n−i}, which satisfy a number of important properties, including linear independence, symmetry, roots at 0 and 1 only, partition of unity and positivity (i.e., convex partition of unity) in (0, 1). They can be expressed through the following (convex) recursion formula:

B^n_i(t) = (1 − t) B^{n−1}_i(t) + t B^{n−1}_{i−1}(t),   (2)

where B^{n−1}_{−1} = B^{n−1}_n = 0 and B^0_0 = 1. Since the n + 1 linearly independent Bernstein polynomials B^n_i form a basis for all polynomials of degree ≤ n, any polynomial curve c(t) of degree ≤ n has a unique n-th degree Bernstein-Bézier representation

c(t) = Σ_{i=0}^{n} c_i B^n_i(t),   (3)

where the coefficients c_i ∈ R^d, called Bézier points [13], are elements of an affine space, i.e., the elements of R^d are points with coordinates defined with respect to a frame consisting of an origin with coordinates (0, ..., 0). We note that the Bernstein-Bézier representation is sometimes referred to as the B-form, as suggested by Carl de Boor in [15]. The curve in (3) can be evaluated using de Casteljau's corner-cutting algorithm [7] by letting c^0_i = c_i and repeatedly applying the recursion formula for Bernstein polynomials in (2):

c^r_i(t) = (1 − t) c^{r−1}_i(t) + t c^{r−1}_{i+1}(t),  r = 1, ..., n,  i = 0, ..., n − r,

where the c^r_i are intermediate points of de Casteljau's algorithm, and c(t) = c^n_0(t). This recursive convex linear interpolation between two points can be expressed in matrix form, as outlined in [8]. As an example, computing from right to left, for n = 3:

c(t) = (1−t  t) ( 1−t  t    0
                  0    1−t  t ) ( 1−t  t    0    0
                                  0    1−t  t    0
                                  0    0    1−t  t ) (c_0, c_1, c_2, c_3)^T.   (4)

By consecutively multiplying the matrices in (4) from right to left, the dimension of the vector on the RHS is reduced by 1 for each computation. The factor matrices are of dimension n × (n + 1), given by their number of rows and columns, respectively. It follows that (3) can be rewritten as

c(t) = T_1(t) T_2(t) T_3(t) c,   (5)

where c = (c_0, c_1, c_2, c_3)^T and the factor matrices are denoted by T_n(t). As noted in [10], computing the three matrices from the left results in a vector containing the four Bernstein polynomials of degree three: (B^3_0(t), B^3_1(t), B^3_2(t), B^3_3(t)).
This method can be seen as a special case of the de Boor-Cox recursion formula [16,17] for B-splines, since B-splines are the proper generalization of Bézier curves [18].
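Both reading orders of the univariate factorization can be sketched directly. The helper below builds the k × (k + 1) factor matrix T_k(t) with rows (..., 1 − t, t, ...); multiplying from the right reproduces de Casteljau, while multiplying from the left produces the Bernstein basis values, as described above. (All function names are ours, a prototype of the construction rather than the paper's code.)

```python
from math import comb

def factor_matrix(k, t):
    """k x (k+1) univariate Bernstein factor matrix T_k(t):
    row i holds 1 - t in column i and t in column i + 1."""
    T = [[0.0] * (k + 1) for _ in range(k)]
    for i in range(k):
        T[i][i] = 1.0 - t
        T[i][i + 1] = t
    return T

def bezier_point(coeffs, t):
    """Right-to-left product T_1(t) ... T_n(t) c: the de Casteljau steps."""
    v = list(coeffs)
    for k in range(len(coeffs) - 1, 0, -1):      # apply T_n, ..., T_1
        v = [sum(a * b for a, b in zip(row, v)) for row in factor_matrix(k, t)]
    return v[0]

def bernstein_row(n, t):
    """Left-to-right product T_1(t) ... T_n(t): the 1 x (n+1) row
    (B^n_0(t), ..., B^n_n(t)) of Bernstein basis values."""
    row = [1.0]
    for k in range(1, n + 1):
        T = factor_matrix(k, t)
        row = [sum(row[i] * T[i][j] for i in range(k)) for j in range(k + 1)]
    return row

t = 0.4
# Linear precision: coefficients 0, 1, 2, 3 reproduce the cubic c(t) = 3t.
print(bezier_point([0.0, 1.0, 2.0, 3.0], t))
# The left-to-right product matches the explicit formula C(n, i) t^i (1-t)^(n-i).
row = bernstein_row(3, t)
print(max(abs(row[i] - comb(3, i) * t**i * (1 - t)**(3 - i)) for i in range(4)))
```

The sparsity (two non-zeros per row) is what makes the factorized evaluation cheap in either direction.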
The curve c(t) can be evaluated by multiplying the vector of Bernstein polynomials together with the vector of coefficients c. We introduce the following matrix notation for the set of Bernstein polynomials of degree n:

B^n(t) = (B^n_0(t), B^n_1(t), ..., B^n_n(t)).   (6)

Thus, by applying (6) to (5), the cubic Bézier curve in (4) can be expressed as c(t) = B^3(t) c. The binomial in (4) can be expressed using barycentric coordinates simply by substituting 1 − t with v and t with w, so that (v, w) is a barycentric coordinate with respect to a line segment L = [l_0, l_1] ⊂ R^1, where v + w = 1. The first three matrices of (4) would then become

T_1(v, w) = (v  w),  T_2(v, w) = ( v  w  0
                                   0  v  w ),  T_3(v, w) = ( v  w  0  0
                                                             0  v  w  0
                                                             0  0  v  w ).   (7)

Prior to introducing multivariate Bernstein polynomials, we provide a brief description of square matrices and some of their properties, which we shall use in some of the proofs and examples in the sequel of this paper.

The algebra of square matrices
N-th order square matrices form a vector space with respect to the ordinary summation of matrices and multiplication with a scalar [19, §93 (Example 1), §94]. Its dimension is N^2, and one basis in it is provided by the canonical matrices E_ij = (δ_ik δ_ℓj)_{k,ℓ=1}^{N}, i, j = 1, ..., N, where δ_ij is the Kronecker delta (equal to 1 if i = j and 0 otherwise). For the linear span of the so-defined canonical basis we shall use the notation E = E^{N×N} = E^{N×N}(F), where F denotes the field of scalars of the linear span and where, in our case, we shall always consider only the case F = R (real scalars). It is well known that, when endowed with matrix multiplication, E^{N×N} becomes a non-commutative algebra [19, §93]. Since in the sequel we shall not be using all the properties of E^{N×N} as an algebra (for example, we shall not be discussing inverse elements in E^{N×N}), our choice here is to provide a self-consistent presentation of only those properties of the algebra E^{N×N} which are of relevance to this article, through the next few lemmas.
Lemma 1. The dimension of the space E^{N×N} is determined by the largest dimension of the matrices involved in a given multiplication. For instance, let C = AB, where A, B and C are matrices with dimensions K × L, L × M and K × M, respectively. Then N = max{K, L, M}, and we can, for instance, express the matrix A as

A = Σ_{k=1}^{N} Σ_{l=1}^{N} a_kl E_kl,   (8)

where a_kl are the entries in A, which we define to be zero if k > K or if l > L.
The result is a space E^{N×N} which contains A as an element of the linear span of the E_ij for the first K values of i and the first L values of j, i.e., corresponding to a K × L rectangular submatrix situated in the upper left corner of the N × N square matrix. Entries from B and C can be obtained analogously.
Formula (8) is a concise form of the respective formula in [19, §93 (item 1)] and a simplification of the respective formulae in [19, §94 (item 2)].This will prove essential in the proofs of our main results, because there (8) will be used iteratively.
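The zero-padding embedding behind (8) is easy to illustrate: rectangular factors are placed in the upper-left corner of N × N square matrices, with N = max{K, L, M}, and the product of the embeddings reproduces the product of the originals in its upper-left block. A minimal sketch (helper names are ours):

```python
def embed(A, N):
    """Zero-pad a K x L matrix into the upper-left corner of an N x N
    square matrix, i.e. represent A = sum_{k,l} a_kl E_kl in E^{N x N}."""
    S = [[0.0] * N for _ in range(N)]
    for k, row in enumerate(A):
        for l, a in enumerate(row):
            S[k][l] = a
    return S

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

A = [[1.0, 2.0, 3.0]]        # 1 x 3
B = [[1.0], [0.0], [2.0]]    # 3 x 1
N = 3                        # max{K, L, M}
C = matmul(A, B)             # the ordinary 1 x 1 product
S = matmul(embed(A, N), embed(B, N))
print(C[0][0], S[0][0])      # the square product reproduces C in the corner
```

This is why the iterated rectangular products in the factorization can all be discussed inside one algebra E^{N×N} of sufficiently high order.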
For the particular case when C = B, D = A in formula (9) we obtain the following:

Multivariate Bernstein polynomials
The symmetric properties of the Bernstein polynomials suggest that barycentric coordinates are a natural choice to represent multivariate scenarios.We proceed by considering factorization of the Bernstein polynomials in two variables expressed in barycentric coordinates with respect to a triangle in R 2 .Then we shall generalize the resulting factor matrices to the multivariate setting, with a recursive definition based on a particular decomposition of the matrices into submatrices.

Bivariate Bernstein polynomials
Proceeding as in (1), we compute the trinomial expansion

1 = (u + v + w)^n = Σ_{i+j+k=n} (n! / (i! j! k!)) u^i v^j w^k,

where i, j, k ≥ 0 and i + j + k = n. This leads to the Bernstein polynomials of degree n,

B^n_ijk(u, v, w) = (n! / (i! j! k!)) u^i v^j w^k,

where (u, v, w) is the barycentric coordinate of a point p with respect to a triangle A = (a_0, a_1, a_2) ⊂ R^2, (i, j, k) ∈ {0, 1, ..., n}^3, and i + j + k = n. We note that (u, v, w) represents a local parameter with respect to A and p a global parameter, since p = a_0 u + a_1 v + a_2 w, with u + v + w = 1. The recurrence relation for Bernstein polynomials associated with a triangle A ⊂ R^2 in homogeneous barycentric coordinates is defined (see, e.g., [12, 13]) as

B^n_ijk = u B^{n−1}_{i−1,j,k} + v B^{n−1}_{i,j−1,k} + w B^{n−1}_{i,j,k−1},   (12)

where we consider expressions with negative subscripts to be zero. It is well known [12] that the Bernstein polynomials form a basis for the space of polynomials of degree n. Thus, given any triangle A, every polynomial p of degree n has a unique Bernstein-Bézier representation

p = Σ_{i+j+k=n} c_ijk B^n_ijk,   (13)

where the B^n_ijk are the Bernstein polynomials associated with A. In the sequel we shall assume a lexicographic order of the C(n+2, 2) Bernstein polynomials, as provided in (13) and (14), such that B^n_rst > B^n_ijk when r > i, or if r = i, then s > j, or if r = i and s = j, then t > k. As an example, the order of B^n_ijk for i + j + k = n and n = 3 is the following: 300, 210, 201, 120, 111, 102, 030, 021, 012, 003. We use the recurrence relation in (12) to define the triangular (bivariate) Bernstein factor matrix in homogeneous barycentric coordinates, as follows: T_{2,n}(u, v, w) is a C(n+1, 2) × C(n+2, 2) band-limited matrix with three non-zero elements on each row. The columns of T_{2,n}(u, v, w) correspond to the C(n+2, 2) vectors of terms of the recurrence relation in (12) in lexicographic order, such that the non-zero elements in every column are

1. positioned according to the lexicographic index numbers of the B^{n−1}_ijk on the RHS, and
2. taking the values of their associated variables: u, v, or w.
It follows immediately from Theorem 5 that the first three triangular (bivariate) factor matrices in homogeneous barycentric coordinates are as follows:

T_{2,1}(u, v, w) = (u  v  w),   (15)

T_{2,2}(u, v, w) = ( u  v  w  0  0  0
                     0  u  0  v  w  0
                     0  0  u  0  v  w ),   (16)

T_{2,3}(u, v, w) = ( u  v  w  0  0  0  0  0  0  0
                     0  u  0  v  w  0  0  0  0  0
                     0  0  u  0  v  w  0  0  0  0
                     0  0  0  u  0  0  v  w  0  0
                     0  0  0  0  u  0  0  v  w  0
                     0  0  0  0  0  u  0  0  v  w ),   (17)

where the horizontal and vertical lines are used to annotate sub-matrices, which we shall describe later. We find it appropriate to include the following lemma, which states that the proposed factor matrices are a factorization of the set of Bernstein polynomials in three variables in barycentric coordinates.

Lemma 6. The Bernstein factor matrices {T_{2,i}}_{i=1}^{n} factor the Bernstein polynomials B^n_2:

B^n_2(u, v, w) = T_{2,1}(u, v, w) T_{2,2}(u, v, w) ··· T_{2,n}(u, v, w).
By investigating the matrices in (7) and (15)–(17), we observe that a specific Bernstein factor matrix consists of four sub-matrices: the upper left sub-matrix is the factor matrix of the same dimension but of one degree lower; the lower right sub-matrix is the factor matrix of one dimension lower but of the same degree; the upper right sub-matrix is a zero matrix; and in the lower left sub-matrix the only non-zero elements are on the diagonal running from one of the entries in the top row to the lower right element. We shall use the term right diagonal, since it can be seen as a main diagonal which is shifted as far as possible to the right in a square matrix.
It turns out that this relation holds for an arbitrary degree of the factor matrices. We therefore summarize the findings in the following lemma.
Lemma 7. The factor matrix T_{2,n}(u, v, w), for degree n > 1, consists of four sub-matrices, such that

T_{2,n}(u, v, w) = ( T_{2,n−1}(u, v, w)    0
                     T^diag_{2,n}(u)       T_{1,n}(v, w) ),   (18)

where T^diag_{2,n}(u) is a matrix in which the only non-zero elements are on the right diagonal and equal to u.
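Both the factorization of Lemma 6 and the block layout of Lemma 7 can be checked numerically with a small prototype of the construction (the helper names and the index-building strategy are ours; rows of T_{2,k} are indexed by the degree-(k − 1) multi-indices, columns by the degree-k ones, both in the paper's lexicographic order, and the entry in row j, column j + e_m is the m-th barycentric variable):

```python
from math import comb, factorial

def multi_indices(d, n):
    """All (d+1)-tuples of nonnegative integers summing to n, in the
    paper's lexicographic order (largest first entry first)."""
    if d == 0:
        return [(n,)]
    return [(v,) + rest
            for v in range(n, -1, -1)
            for rest in multi_indices(d - 1, n - v)]

def factor_matrix(d, k, u):
    """Bernstein factor matrix T_{d,k}(u): for each row index j with
    |j| = k - 1, the entry in column j + e_m equals u[m]."""
    rows, cols = multi_indices(d, k - 1), multi_indices(d, k)
    pos = {c: idx for idx, c in enumerate(cols)}
    T = [[0.0] * len(cols) for _ in rows]
    for r, j in enumerate(rows):
        for m in range(d + 1):
            i = tuple(j[s] + (s == m) for s in range(d + 1))
            T[r][pos[i]] = u[m]
    return T

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

u = (0.2, 0.3, 0.5)

# Lemma 6: the left-to-right product equals the trinomial Bernstein values.
n = 3
P = [[1.0]]
for k in range(1, n + 1):
    P = matmul(P, factor_matrix(2, k, u))
explicit = [factorial(n) / (factorial(i) * factorial(j) * factorial(kk))
            * u[0]**i * u[1]**j * u[2]**kk
            for (i, j, kk) in multi_indices(2, n)]
print(max(abs(a - b) for a, b in zip(P[0], explicit)))  # ~0

# Lemma 7: block layout of T_{2,n}.
for n in (2, 3):
    T = factor_matrix(2, n, u)
    R1, C1 = comb(n, 2), comb(n + 1, 2)          # size of T_{2,n-1}
    assert [row[:C1] for row in T[:R1]] == factor_matrix(2, n - 1, u)
    assert [row[C1:] for row in T[R1:]] == factor_matrix(1, n, u[1:])
    assert all(x == 0.0 for row in T[:R1] for x in row[C1:])
    LL = [row[:C1] for row in T[R1:]]            # u on the right diagonal
    for r in range(len(LL)):
        for c in range(C1):
            assert LL[r][c] == (u[0] if c == C1 - len(LL) + r else 0.0)
print("Lemma 7 block structure verified for n = 2, 3")
```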

The d-variate case

The d-variate Bernstein polynomials in barycentric coordinates are defined [13] as follows:

B^n_i(u) = (n! / (i_0! ··· i_d!)) u_0^{i_0} ··· u_d^{i_d},  |i| = n,   (19)

where u = (u_0, ..., u_d) is the barycentric coordinate of a point p with respect to a simplex A ⊂ R^d, and i = (i_0, ..., i_d) is a multi-index of non-negative integers with |i| = i_0 + ··· + i_d = n. These are the C(n+d, d) Bernstein polynomials of degree n. They form a basis for all d-variate polynomials of total degree ≤ n, they form a partition of unity, and they are positive for u > 0, which is one reason to consider Bernstein polynomials over the simplex A.
The following recurrence relation holds for the Bernstein polynomials in (19):

B^n_i(u) = Σ_{j=0}^{d} u_j B^{n−1}_{i−e_j}(u),   (20)

where e_0, ..., e_d represent the columns of the (d + 1) × (d + 1) identity matrix, B^0_0 = 1, B^{n−1}_{i−e_j} = 0 if i − e_j has negative components, and |i − e_j| = n − 1. We shall assume a lexicographic order of the multivariate Bernstein polynomials, similar to the R^2 case, based on the index numbers i_0, ..., i_d. The Bernstein factor matrices for an arbitrary number of variables are defined as in the bivariate case: the non-zero elements in every column of T_{d,n}(u) are

1. positioned according to the lexicographic index numbers of the B^{n−1}_{i−e_j} on the RHS, and
2. taking the values of their associated variables u_0, ..., u_d.

We recall Theorem 7, which can be extended to the multivariate case. An outline of a proof is to use the recurrence relation provided in (20) and Theorem 8, in a manner similar to the proof of Theorem 7. We use this to propose an alternative, equivalent definition of the multivariate Bernstein factor matrices of arbitrary degree, based on recursion of the sub-matrices, as summarized in the following theorem, which is a main result:

T_{d,n}(u_0, ..., u_d) = ( T_{d,n−1}(u_0, ..., u_d)    0
                           T^diag_{d,n}(u_0)           T_{d−1,n}(u_1, ..., u_d) ),   (21)

where d ≥ 0 is the dimension, n ≥ 1 is the degree, T_{0,q} = (u_d) for q ≤ n, T_{p,1} = (u_{d−p}, ..., u_d) for p ≤ d, and T^diag_{q,n}(u_{d−q}) is a matrix in which the only non-zero elements are on the right diagonal and equal to u_{d−q}.
In the following we will in some cases omit specifying the parameters of the matrix T and its sub-matrices for simplification. We note that the upper right sub-matrix 0 in (21) is a matrix in which all the entries are zero.
Lemma 10. The entries in every row of T_{d,n}(u) are either 0, u_0, ..., or u_d, such that each of the elements u_0, ..., u_d occurs once, in increasing order of index, with optional zeros in between.
For the sake of clarity, we formalize, in the following lemma, that the proposed factor matrices are a factorization of the set of Bernstein polynomials in d variables, similar to Theorem 6 for the R^2 case.
Lemma 11. The Bernstein factor matrices {T_{d,i}}_{i=1}^{n} factor the Bernstein polynomials B^n_d:

B^n_d(u) = T_{d,1}(u) T_{d,2}(u) ··· T_{d,n}(u).   (22)

We include the following lemma concerning a symmetry property of the Bernstein factor matrices:

Lemma 12. Given the barycentric coordinates u = (u_0, ..., u_d) and z = (z_0, ..., z_d) with respect to a d-dimensional simplex A. Then

T_{d,n−1}(u) T_{d,n}(z) = T_{d,n−1}(z) T_{d,n}(u).   (23)
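Lemma 11 can also be checked in a higher-dimensional case. The sketch below (our prototype of the construction, with d = 3) compares the left-to-right factor-matrix product against the explicit multinomial formula for B^n_i(u):

```python
from math import factorial

def multi_indices(d, n):
    """All (d+1)-tuples of nonnegative integers summing to n,
    largest first entry first (the paper's lexicographic order)."""
    if d == 0:
        return [(n,)]
    return [(v,) + rest
            for v in range(n, -1, -1)
            for rest in multi_indices(d - 1, n - v)]

def factor_matrix(d, k, u):
    """T_{d,k}(u): entry u[m] in row j (|j| = k - 1), column j + e_m."""
    rows, cols = multi_indices(d, k - 1), multi_indices(d, k)
    pos = {c: idx for idx, c in enumerate(cols)}
    T = [[0.0] * len(cols) for _ in rows]
    for r, j in enumerate(rows):
        for m in range(d + 1):
            i = tuple(j[s] + (s == m) for s in range(d + 1))
            T[r][pos[i]] = u[m]
    return T

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

# d = 3, n = 3: T_{3,1}(u) T_{3,2}(u) T_{3,3}(u) should equal the row of
# the 20 Bernstein polynomials of degree 3 in barycentric coordinates u.
u, d, n = (0.1, 0.2, 0.3, 0.4), 3, 3
P = [[1.0]]
for k in range(1, n + 1):
    P = matmul(P, factor_matrix(d, k, u))
explicit = []
for i in multi_indices(d, n):
    val = float(factorial(n))
    for ik, uk in zip(i, u):
        val = val / factorial(ik) * uk ** ik
    explicit.append(val)
print(max(abs(a - b) for a, b in zip(P[0], explicit)))  # ~0
```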
Directional derivatives of Bernstein polynomials and factor matrices

From the chain rule, using also the properties (24) and (25) of the barycentric coordinates and Theorem 13, we have that

D_v = Σ_{j=0}^{d} v_j ∂/∂u_j.

Partial derivatives of a Bernstein polynomial with respect to each of its parameters can be obtained as

∂B^n_i(u) / ∂u_j = n B^{n−1}_{i−e_j}(u),

where i is a multi-index, |i| = n, |i − e_j| = n − 1, and B^{n−1}_j = 0 if j has a negative component. This gives rise to the following lemma, which can be found in a trivariate setting in [12]:

D_v B^n_i(u) = n Σ_{j=0}^{d} v_j B^{n−1}_{i−e_j}(u).

Recall from Theorem 11 that

B^n_d(u) = T_{d,1}(u) T_{d,2}(u) ··· T_{d,n}(u).   (29)

This product of matrices can be differentiated by the same formulae as if the factors were numbers. If T(u) is a matrix whose entries are functions of u, then the derivative in the direction of v, D_v T of T, is defined as the matrix obtained by differentiating each entry of T with respect to v. The entries are obtained by differentiation of linear convex combinations, which yields constants independent of u.
We formalize the definition of the derivative of the Bernstein factor matrix T_{d,n}(u): D_v T_{d,n}(u) is the C(n+d−1, d) × C(n+d, d) band-limited matrix obtained by differentiating every entry of T_{d,n}(u) in the direction of v; it has d + 1 nonzero constant elements on each row (independent of u).
We note here that the sub-matrix D_v T^diag_{d,n} is zero if v_0 = 0, since the only non-zero elements in D_v T^diag_{d,n} are equal to v_0. The remainder of this section is partially based on an analogous treatment of univariate B-splines on matrix form in [11], adjusted here to the setting of multivariate Bernstein polynomials. The following well-known rule for differentiating a product of two matrices will be used in order to find the derivative of the factorization.

Lemma 16. Let A and B be two matrices with compatible dimensions for the matrix product AB and with entries that are functions of (u_0, ..., u_d).

Then D(AB) = (DA)B + A(DB).
By applying the rule from Theorem 16 to the product (29), we get

D_v B^n_d(u) = Σ_{k=1}^{n} T_{d,1}(u) ··· D_v T_{d,k}(u) ··· T_{d,n}(u).   (30)

Formula (30) can be simplified by applying the following lemma:

Lemma 17. The matrices T_{d,k−1} and T_{d,k} satisfy the relation

T_{d,k−1}(v) T_{d,k}(u) = T_{d,k−1}(u) T_{d,k}(v)   (31)

for k ≥ 2, any u = (u_0, ..., u_d), u_k ∈ R, k = 0, 1, ..., d, and any given direction v = (v_0, ..., v_d) with Σ_{k=0}^{d} v_k^2 > 0.

The differentiation operator D_v in (30) can be shifted between the matrices by using (31), and by making use of Theorem 15 we obtain

D_v B^n_d(u) = n T_{d,1}(v) T_{d,2}(u) ··· T_{d,n}(u).   (32)

We are now ready to provide one of the main results, a theorem which states that the r-th directional derivative of the set of Bernstein polynomials B^n_d can be obtained by differentiating r of the factor matrices.
Theorem 18. Given a set of Bernstein polynomials B^n_d(u) and the directions v_1, ..., v_r, for 1 ≤ r ≤ n. Then

D_{v_1} ··· D_{v_r} B^n_d(u) = (n! / (n − r)!) T_{d,1}(v_1) ··· T_{d,r}(v_r) T_{d,r+1}(u) ··· T_{d,n}(u).   (33)

We recall from (35) that any d-variate polynomial of degree n has a Bernstein-Bézier representation s(u) = B^n_d(u) c. Next, we obtain the following via applying Theorem 18:

Corollary 19. Given a polynomial s(u) on Bernstein-Bézier form and the directions v_1, ..., v_r. Then

D_{v_1} ··· D_{v_r} s(u) = (n! / (n − r)!) T_{d,1}(v_1) ··· T_{d,r}(v_r) T_{d,r+1}(u) ··· T_{d,n}(u) c.
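The differentiated factorization can be sketched numerically. Since every entry of T_{d,k}(u) is linear in u, its directional derivative is simply T_{d,k}(v), and for one derivative (r = 1) commuting that factor to the front collects the factor n, as in the univariate pattern. The prototype below (helper names are ours) checks the resulting row against a central finite difference of the Bernstein polynomials:

```python
def multi_indices(d, n):
    if d == 0:
        return [(n,)]
    return [(v,) + rest
            for v in range(n, -1, -1)
            for rest in multi_indices(d - 1, n - v)]

def factor_matrix(d, k, u):
    rows, cols = multi_indices(d, k - 1), multi_indices(d, k)
    pos = {c: idx for idx, c in enumerate(cols)}
    T = [[0.0] * len(cols) for _ in rows]
    for r, j in enumerate(rows):
        for m in range(d + 1):
            i = tuple(j[s] + (s == m) for s in range(d + 1))
            T[r][pos[i]] = u[m]
    return T

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def bern_row(d, n, u):
    """Left-to-right product: the row of Bernstein polynomial values."""
    P = [[1.0]]
    for k in range(1, n + 1):
        P = matmul(P, factor_matrix(d, k, u))
    return P[0]

d, n = 2, 3
u = (0.2, 0.3, 0.5)
v = (1.0, -1.0, 0.0)          # a direction: barycentric coordinates sum to 0

# r = 1: n * T_{d,1}(v) * T_{d,2}(u) * ... * T_{d,n}(u)
P = [[float(n)]]
P = matmul(P, factor_matrix(d, 1, v))
for k in range(2, n + 1):
    P = matmul(P, factor_matrix(d, k, u))
deriv = P[0]

# Central finite difference of each Bernstein polynomial along v.
h = 1e-5
up = tuple(a + h * b for a, b in zip(u, v))
um = tuple(a - h * b for a, b in zip(u, v))
fd = [(a - b) / (2 * h) for a, b in zip(bern_row(d, n, up), bern_row(d, n, um))]
print(max(abs(a - b) for a, b in zip(deriv, fd)))  # small: finite-difference error only
```

As a sanity check, the entries of `deriv` sum to (numerically) zero, since the Bernstein polynomials form a partition of unity.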

Relevance to the de Casteljau algorithm
The Bernstein polynomials form a basis; thus, every d-variate polynomial s(u) has a Bernstein-Bézier representation with respect to a reference simplex A:

s(u) = Σ_{|i|=n} c_i B^n_i(u),   (35)

which can be evaluated with de Casteljau's algorithm by applying the recurrence relation in (20). Similar to the case of curves, we obtain

s(u) = Σ_{|i|=n−1} c^(1)_i(u) B^{n−1}_i(u),  with  c^(1)_i(u) = Σ_{j=0}^{d} u_j c_{i+e_j},

and, after n − 2 further steps, s(u) = Σ_{|i|=1} c^(n−1)_i(u) B^1_i(u), whose final step yields the value s(u) = c^(n)_0(u). The directional derivative of a polynomial in Bernstein-Bézier form is formulated, based on a description in [13], as follows.
Lemma 20. The directional derivative of a d-variate polynomial s(u) on Bernstein-Bézier form, at the point p in the direction of v, is given by

D_v s = n Σ_{|j|=n−1} d_j B^{n−1}_j(u),  where  d_j = Σ_{k=0}^{d} v_k c_{j+e_k}

are the intermediate points resulting from the first step of the de Casteljau algorithm based on the barycentric coordinates of v.
We observe that the coefficients of the first derivative of a polynomial on Bernstein-Bézier form are as provided by Theorem 20 and verify that D v s at the point p with barycentric coordinates u can be evaluated using the de Casteljau algorithm by applying one step of the algorithm using v and then n − 1 steps using u.
We abbreviate the intermediate points d_j, using the notation from [13], by d_j = ∆_v c_j. Higher-order derivatives are obtained in the same way, e.g., an r-th derivative is expressed in terms of intermediate points d^(r)_j, where |j| = n − r. The following result is obtained by applying Theorem 20 repeatedly.
Lemma 21. Given the directions v_1, ..., v_r, for 1 ≤ r ≤ n. Then

D_{v_1} ··· D_{v_r} s = (n! / (n − r)!) Σ_{|j|=n−r} d^(r)_j B^{n−r}_j(u),

where the d^(r)_j are the intermediate points obtained after performing r steps of the de Casteljau algorithm using v_1, ..., v_r in this order.
One consequence of Theorem 21 is that D_{v_1} ··· D_{v_r} s can be evaluated at the point p with barycentric coordinates u by first applying r steps of the de Casteljau algorithm using v_1, ..., v_r in this order, and then by using u in the following n − r steps.
The forward difference operator ∆_v commutes with the steps of the de Casteljau algorithm [13], since the computation of affine combinations of affine combinations is commutative. Thus, an r-th directional derivative can also be obtained by first computing n − r steps of the de Casteljau algorithm, followed by r differentiation steps.
We obtain the following result: by using Theorem 9, Theorem 11 and (20), formula (35) can be expressed in matrix form.

Theorem 22. A polynomial s(u) on Bernstein-Bézier form can be expressed by using Bernstein factor matrices as

s(u) = T_{d,1}(u) T_{d,2}(u) ··· T_{d,n}(u) c.   (36)

Computing the RHS in (36) from right to left corresponds to the steps of the de Casteljau algorithm.
We note that computing only the matrices T_{d,1}(u) ··· T_{d,n}(u), from left to right, yields a vector containing the C(d+n, d) Bernstein polynomials of degree n in d + 1 variables with respect to the simplex A ⊂ R^d. One important new observation here is that, in a manner similar to the case of curves, this method can be seen to correspond to a special case of the de Boor-Cox recursion formula for B-splines in a multivariate setting. We conclude this section by recalling Theorem 19 and Theorem 21 to make another important new observation: namely, Theorem 22 is applicable also for computations of directional derivatives.
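The two reading orders of (36) can be compared directly: right to left repeatedly maps the coefficient vector of points through the factor matrices, as in de Casteljau, while left to right first produces the Bernstein row and then takes a single dot product with the coefficients. A sketch with our prototype of the construction (helper names and sample coefficients are ours):

```python
from math import comb

def multi_indices(d, n):
    if d == 0:
        return [(n,)]
    return [(v,) + rest
            for v in range(n, -1, -1)
            for rest in multi_indices(d - 1, n - v)]

def factor_matrix(d, k, u):
    rows, cols = multi_indices(d, k - 1), multi_indices(d, k)
    pos = {c: idx for idx, c in enumerate(cols)}
    T = [[0.0] * len(cols) for _ in rows]
    for r, j in enumerate(rows):
        for m in range(d + 1):
            i = tuple(j[s] + (s == m) for s in range(d + 1))
            T[r][pos[i]] = u[m]
    return T

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

u, d, n = (0.2, 0.3, 0.5), 2, 3
coeffs = [[float(i), float(i * i)] for i in range(comb(n + d, d))]  # points in R^2

# Right to left (de Casteljau): every step yields intermediate points.
pts = [list(p) for p in coeffs]
for k in range(n, 0, -1):
    T = factor_matrix(d, k, u)
    pts = [[sum(T[r][c] * pts[c][m] for c in range(len(pts))) for m in range(2)]
           for r in range(len(T))]
right_to_left = pts[0]

# Left to right (de Boor-Cox view): Bernstein row first, then one dot product.
row = [[1.0]]
for k in range(1, n + 1):
    row = matmul(row, factor_matrix(d, k, u))
left_to_right = [sum(b * p[m] for b, p in zip(row[0], coeffs)) for m in range(2)]
print(right_to_left, left_to_right)  # identical up to rounding
```

When the coefficients are points in R^d with d > 1, the left-to-right order performs the vector arithmetic only once, in the final dot product, which is the operation-count difference mentioned in the concluding remarks.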

The de Casteljau algorithm as an iterative procedure
The de Casteljau algorithm can be described as an iterative process by using a projection and a linear step. Let us recall the expression in (5) for a cubic parametric curve on Bézier form by factorization using matrices, where in general, for degree n, we obtain c(t) = T_1(t) ··· T_n(t) c, where c = (c_0, ..., c_n)^T is a vector containing the coefficients, or Bézier points, for the curve. Then we extend the matrices T_k(t) for 1 ≤ k ≤ n, as described in Theorem 1, to obtain a product of n square matrices of size (n + 1) × (n + 1), multiplied with the vector v_0 containing the initial coefficients. After performing one step of the de Casteljau algorithm, which corresponds to multiplying together the two rightmost factors in the product, we find the vector v_1 of intermediate points. We investigate the difference between the two steps by considering the difference between the vectors of coefficients, d_1 = v_1 − v_0. Recall that the intermediate points are convex combinations of consecutive coefficients. It follows that the difference vector d_1 can be decomposed into an orthogonal projection from R^{n+1} onto R^n, and a linear step.
In general, we find that the k-th step of the iterative procedure yields the vector v_k, with the projection b_k and the linear step d_k defined analogously. The process terminates when a single point remains. Figure 1 shows how the vectors of coefficients are repeatedly projected onto a subspace and translated by a linear step depending on t, for the case of a parametric curve with n = 2. The number of coefficients, or points, is reduced by one for each projection until one single point remains.
Figure 1: An illustration of the steps of the de Casteljau algorithm by iteratively performing an orthogonal projection followed by a linear step, for a parametric curve, with n = 2. The initial coefficients, or points, are contained in the vector v_0 ∈ R^3. b_1 is the orthogonal projection of v_0 onto R^2 ⊂ R^3, v_1 constitutes the vector of intermediate points after the first step of the algorithm, and b_2 is the orthogonal projection of v_1 onto R^1 ⊂ R^2. The final step yields the vector v_2 ∈ R^1. It contains one point, which constitutes the value of the curve at the parameter value t.
We note that the space of square matrices used in the considered example is a complete inner product space (i.e., a Hilbert space) with the Euclidean norm ‖·‖_{ℓ^2}. The method can be generalized to uniformly convex Banach spaces with the ℓ^p-norm, 1 < p < ∞, since there will still exist a unique best approximation in such cases [3]. However, that will most likely yield non-polynomial special functions in general, although they may share some properties with the polynomial case, where p = 2.

Representing a parametric surface
In this example we show how directional derivatives of a multivariate cubic Bézier representation of a parametric surface s(u) can be expressed in matrix form by factorization. First of all, we consider the surface

s(u) = B^3_d(u) c,

where B^3_d(u) are the d-variate Bernstein polynomials of degree 3, and c are its Bézier coefficients. The derivative of s(u) at the point p in the direction of v_0 can be expressed as a factorization of the Bernstein polynomials by matrices:

D_{v_0} s(u) = D_{v_0} (T_{d,1}(u) T_{d,2}(u) T_{d,3}(u)) c,

and by applying Theorem 17 we obtain

D_{v_0} s(u) = 3 T_{d,1}(v_0) T_{d,2}(u) T_{d,3}(u) c.

We proceed by applying one more differentiation step, this time in the direction of v_1, and by applying Theorem 17 again and re-ordering the terms we obtain

D_{v_1} D_{v_0} s(u) = 6 T_{d,1}(v_0) T_{d,2}(v_1) T_{d,3}(u) c.

With the use of Theorem 15 we write the differentiated factors as factor matrices evaluated at the respective directions.

Commutativity of multiplication
We shall now look at how a special property of the multivariate Bernstein factor matrix T_{d,n}(u) can be used to provide an alternative proof of Theorem 12 by induction.
Let us recall from (23) that

T_{d,n−1}(u) T_{d,n}(z) = T_{d,n−1}(z) T_{d,n}(u)

for d ≥ 0 and n ≥ 1.
Proof. The proof is based on induction, in the variables d and n of (23), over the well-ordered set of all (d, n). We equip N × N with the lexicographic order relation ≤, defined by (d, n) ≤ (s, t) if (d < s) or (d = s and n ≤ t). This is a total order relation on N × N, which means that we may use the induction theorem. Details on well-ordered sets and induction can be found in [20]. We proceed as follows to prove a property P for all (d, n) in N × N:

1. Show that P holds for (d, n) = (0, 1).
2. Suppose that P holds for all elements of N × N which are less than some arbitrary (d, n).Show that P holds for (d, n).
3. Conclude that P holds for all (d, n) in N × N by the induction theorem for the well-ordered sets.
1. Using d = 0, n = 1 in (23) gives the following relation:

T_{0,0}(u) T_{0,1}(z) = T_{0,0}(z) T_{0,1}(u),   (38)

and it follows that both the LHS and the RHS of (38) compute to uz, since the product of two numbers commutes.
2. We want to prove P(d, n) using any (p, q) < (d, n). Applying (21) and its derivative forms from the definitions above, the left-hand side of (23) becomes a product of two block matrices, which we write as (39).
A similar computation for the RHS of (23) yields (40).
It follows that both the upper left and the lower right sub-matrices of (39) are equal to the corresponding sub-matrices of (40) by the induction hypothesis. Next, we need to check that the lower left sub-matrices of (39) and (40) are equal (41). The non-zero elements of T^diag_{d,n}(u) and T^diag_{d,n}(z) are equal to u_0 and z_0, respectively. Thus, the terms on the LHS and the RHS of (41) can be re-ordered pairwise, and it follows that LHS = RHS in (41), since T^diag_{d,n}(u) contains u_0's and T^diag_{d,n}(z) contains z_0's, and since the product of two real scalars is commutative.
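The commutation relation just proved is straightforward to test numerically with a prototype of the construction (helper names are ours): T_{d,k−1}(u) T_{d,k}(z) = T_{d,k−1}(z) T_{d,k}(u).

```python
def multi_indices(d, n):
    if d == 0:
        return [(n,)]
    return [(v,) + rest
            for v in range(n, -1, -1)
            for rest in multi_indices(d - 1, n - v)]

def factor_matrix(d, k, u):
    rows, cols = multi_indices(d, k - 1), multi_indices(d, k)
    pos = {c: idx for idx, c in enumerate(cols)}
    T = [[0.0] * len(cols) for _ in rows]
    for r, j in enumerate(rows):
        for m in range(d + 1):
            i = tuple(j[s] + (s == m) for s in range(d + 1))
            T[r][pos[i]] = u[m]
    return T

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

u = (0.2, 0.3, 0.5)
z = (0.6, 0.1, 0.3)
for k in range(2, 5):
    A = matmul(factor_matrix(2, k - 1, u), factor_matrix(2, k, z))
    B = matmul(factor_matrix(2, k - 1, z), factor_matrix(2, k, u))
    diff = max(abs(a - b) for ra, rb in zip(A, B) for a, b in zip(ra, rb))
    print(k, diff)  # ~0 for every k: the arguments may be swapped
```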

Proofs
This section contains proofs of some of the lemmas and theorems presented earlier in this article.

Proof of Theorem 2
Proof. We consider the product E_ij E_kℓ, where the entries in E_ij are (E_ij)_{st} = δ_is δ_jt and the entries of E_kℓ are (E_kℓ)_{tq} = δ_kt δ_ℓq. Finally, we obtain

(E_ij E_kℓ)_{sq} = Σ_{t=1}^{N} δ_is δ_jt δ_kt δ_ℓq = δ_jk δ_is δ_ℓq,

i.e., E_ij E_kℓ = δ_jk E_iℓ, and the result follows.

Proof of Theorem 3
Proof. We consider the product (E_ij E_kℓ)(E_mn E_pq), where the RHS computes to δ_jk δ_np E_iℓ E_mq. By using the multiplier in (8) for E_ij E_kℓ and E_mn E_pq and re-ordering the sums, we obtain the result, where setting j = k = µ and n = p = ν yields the claimed identity.

Multiplying the vectors results in a matrix where the upper part is a zero matrix and the non-zero elements in the lower part are either v or w, since i = 0, and they follow the layout of T_{1,n}(v, w). The values of j appear in lexicographic order, obtained by taking i − e_i. Since the values of i are ordered, the values of j_k will appear in order, such that the associated values of u_i are in the order u_0, ..., u_d. Since j_k corresponds to a row in T_{d,n}(u), the result follows.

Proof of Theorem 11
Proof. We use induction in the variable n of (22) for arbitrary dimension d.

1. By setting n = 2 in the LHS of (22) and applying Theorem 9, we obtain a product which can be computed directly, and whose RHS can be re-arranged into a difference which clearly is equal to zero, since scalar multiplication is commutative. But then (44) is equal to zero, since changing the order of the sums commutes. Using this with (19) yields the base case. By differentiating with respect to t and evaluating at t = 0, we obtain an expression whose RHS computes to (28).

Proof of Theorem 15
Proof. By applying the chain rule we obtain the entry-wise derivatives of T_{d,n}(u); since every entry is linear in u, the RHS corresponds to T_{d,n}(v), and the result follows.

Proof of Theorem 17
Proof. The result follows by differentiating both sides of (23) with respect to z in the direction of v. Similarly, for the r-th derivative we find the correspondingly shifted relation. Since, in addition, B^{n−r}_d(u) = T_{d,1}(u) ··· T_{d,n−r}(u) holds, and since it does not matter which of the n matrices T_{d,k} is differentiated (it only matters that we differentiate r of them), due to the symmetry property of (31), we conclude that (33) is true.

Proof of Theorem 20
Proof. Since l(t) is a line in the domain, it can be seen to correspond to a curve s ∘ l(t) on a given parametric object. Then the derivative of s(u) at the point p in the direction of v is given by its derivative with respect to t at t = 0+.

Concluding remarks
We have proposed a matrix representation of multivariate Bernstein polynomials of arbitrary dimension by factorization.The factor matrix is defined recursively via a particular decomposition of the matrix into sub-matrices.The multiplication of the factor matrices from right to left was shown to correspond to the steps of the de Casteljau algorithm.A similar relation to the de Boor-Cox recursion formula, specialized for Bézier representation, can be found by computing the matrices from left to right, which yields a vector containing the Bernstein polynomials for the given degree in the specified number of variables.These polynomials can then be multiplied together with the associated vector of coefficients in order to evaluate the parametric object in question.
The matrix representation clearly illustrates one of the differences between the two methods: in the case of de Casteljau, every step yields intermediate points, whereas the intermediate values with the de Boor-Cox method are numbers. The total number of arithmetic operations when multiplying the matrices from right to left (de Casteljau) is higher than in the case of multiplication from left to right (de Boor-Cox), provided that the coefficients are points in R^d, d > 1.

Theorem 1
allows us to simplify the considerations for multiplication of several algebras [19, §94 (item 2)] to a consideration within one single algebra E^{N×N} of sufficiently high order N [19, §93 (item 1)].

Definition 8 .
The Bernstein factor matrix T_{d,n}(u_0, ..., u_d) is a C(n+d−1, d) × C(n+d, d) band-limited matrix with d + 1 non-zero elements on each row. The columns of T_{d,n}(u_0, ..., u_d) correspond to the C(n+d, d) vectors of terms in the recurrence relation (20) in lexicographic order, such that the non-zero elements in every column are

1. positioned according to the lexicographic index numbers of the B^{n−1}_{i−e_j} on the RHS, and
2. taking the values of their associated variables u_0, ..., u_d.

Theorem 9 .
The Bernstein factor matrix T_{d,n}(u) is a C(n+d−1, d) × C(n+d, d) band-limited matrix with d + 1 nonzero elements on each row. The matrix is defined recursively as in (21). The quantities with |i| = n, ..., 0 are intermediate coefficients; these intermediate coefficients are polynomials of degree k.

5.6. Proof of Theorem 10

Proof. According to Theorem 8, the columns in T_{d,n}(u) are arranged such that

1. the order of the columns is based on the lexicographic order of the permutations of i for |i| = n, and
2. u_i is placed on the (i − e_i)-th row within the column, if 0 ≤ (i − e_i) < C(d+n−1, d).

The possible results of computing i − e_i, for i = 0, ..., d, lead to the C(n+d−1, d) permutations of j = (j_0, ..., j_d), where |j| = n − 1. There are d + 1 possible ways to obtain the position j_k, for k = 1, ..., C(n+d−1, d). Finally, since two matrices A and B commute if AB − BA = 0 (see Theorem 4), it follows that T_{d,k−1}(z) T_{d,k}(u) − T_{d,k−1}(u) T_{d,k}(z) = 0.

5.9. Proof of Theorem 14

Proof. The barycentric coordinates of the point p + tv are (u_0 + tv_0, ..., u_d + tv_d).