Multi-Trial Guruswami–Sudan Decoding for Generalised Reed–Solomon Codes

Johan S. R. Nielsen · Alexander Zeh

February 1, 2013

Abstract An iterated refinement procedure for the Guruswami–Sudan list decoding algorithm for Generalised Reed–Solomon codes based on Alekhnovich's module minimisation is proposed. The method is parametrisable and allows variants of the usual list decoding approach. In particular, finding the list of closest codewords within an intermediate radius can be performed with improved average-case complexity while retaining the worst-case complexity.

Keywords Guruswami–Sudan · List Decoding · Reed–Solomon Codes · Multi-Trial

1 Introduction

Since the discovery of a polynomial-time hard-decision list decoder for Generalised Reed–Solomon (GRS) codes by Guruswami and Sudan (GS) [12,7] in the late 1990s, much work has been done to speed up the two main parts of the algorithm: interpolation and root-finding. Notably, for interpolation Beelen and Brander [2] mixed the module reduction approach by Lee and O'Sullivan [8] with the parametrisation of Zeh et al. [13], and employed the fast module reduction algorithm by Alekhnovich [1]. Bernstein [4] pointed out that a slightly faster variant can be achieved by using the reduction algorithm by Giorgi et al. [6].

For the root-finding step, one can employ the method of Roth and Ruckenstein [11] in a divide-and-conquer fashion, as described by Alekhnovich [1]. This step then becomes an order of magnitude faster than interpolation, leaving the latter as the main target for further optimisations.

For a given code, the GS algorithm has two parameters, both positive integers: the interpolation multiplicity s and the list size ℓ. Together with the code parameters they determine the decoding radius τ.
J. S. R. Nielsen is with the Institute of Mathematics, Technical University of Denmark. E-mail: [email protected]
A. Zeh is with the Institute of Communications Engineering, University of Ulm, Germany and Research Center INRIA Saclay–Île-de-France, École Polytechnique, France. E-mail: [email protected]

To achieve a higher decoding radius for some given GRS code, one needs higher s and ℓ, and the values of these strongly influence the running time of the algorithm. In this work, we present a novel iterative method: we first solve the interpolation problem for s = ℓ = 1 and then iteratively refine this solution for increasing s and ℓ. In each step of our algorithm, we obtain a valid solution to the interpolation problem for these intermediate parameters. The method builds upon that of Beelen–Brander [2] and has the same asymptotic complexity.

The method therefore allows a fast multi-trial list decoder when our aim is just to find the list of codewords with minimal distance to the received word. At any time during the refinement process, we will have an interpolation polynomial for intermediate parameters ŝ ≤ s, ℓ̂ ≤ ℓ yielding an intermediate decoding radius τ̂ ≤ τ. If we perform the root-finding step of the GS algorithm on this, all codewords with distance at most τ̂ from the received word are returned; if there are any such words, we break computation and return those; otherwise we continue the refinement. We can choose any number of these trials, e.g. for each possible intermediate decoding radius between half the minimum distance and the target τ.

Since the root-finding step of GS is cheaper than the interpolation step, this multi-trial decoder will have the same asymptotic worst-case complexity as the usual GS using the Beelen–Brander interpolation; however, the average-case complexity is better since fewer errors are more probable.

This contribution is structured as follows. In the next section we give necessary preliminaries and state the GS interpolation problem for decoding GRS codes. In Section 3 we give a definition and properties of minimal matrices.
Alekhnovich's algorithm can bring matrices to this form, and we give a more fine-grained bound on its asymptotic complexity. Our new iterative procedure is explained in detail in Section 4.

2 Preliminaries

2.1 Notation

Let F_q be the finite field of order q and let F_q[X] be the polynomial ring over F_q with indeterminate X. Let F_q[X,Y] denote the polynomial ring in the variables X and Y, and let wdeg_{u,v} X^i Y^j ≜ ui + vj be the (u,v)-weighted degree of X^i Y^j.

A vector of length n is denoted by v = (v_0,...,v_{n−1}). If v is a vector over F_q[X], let deg v ≜ max_i {deg v_i(X)}. We introduce the leading position as LP(v) = max_i {i | deg v_i(X) = deg v}, and the leading term LT(v) = v_{LP(v)} is the term at this position. An m × n matrix is denoted by V = ‖v_{i,j}‖_{i=0,j=0}^{m−1,n−1}. The rows of such a matrix will be denoted by lower-case letters, e.g. v_0,...,v_{m−1}. Furthermore, let deg V = Σ_{i=0}^{m−1} deg v_i. Modules are denoted by capital letters such as M.

2.2 Interpolation-Based Decoding of GRS Codes

Let α_0,...,α_{n−1} be n nonzero distinct elements of F_q with n < q, and let w_0,...,w_{n−1} be n (not necessarily distinct) nonzero elements of F_q. A GRS code GRS(n,k) of length n and dimension k over F_q is given by

    GRS(n,k) ≜ { (w_0 f(α_0),...,w_{n−1} f(α_{n−1})) : f(X) ∈ F_q[X], deg f(X) < k }.   (1)

GRS codes are Maximum Distance Separable (MDS) codes, i.e., their minimum Hamming distance is d = n − k + 1. We shortly explain the interpolation problem of GS [7,12] for decoding GRS codes in the following.

Theorem 1 (Guruswami–Sudan for GRS Codes [7,12]) Let c ∈ GRS(n,k) be a codeword and f(X) the corresponding information polynomial as defined in (1). Let r = (r_0,...,r_{n−1}) = c + e be a received word where weight(e) ≤ τ. Let r_i′ denote r_i/w_i. Let Q(X,Y) ∈ F_q[X,Y] be a nonzero polynomial that passes through the n points (α_0,r_0′),...,(α_{n−1},r_{n−1}′) with multiplicity s ≥ 1, has Y-degree at most ℓ, and wdeg_{1,k−1} Q(X,Y) < s(n−τ). Then (Y − f(X)) | Q(X,Y).
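Definition (1) is concrete enough to sketch in code. The following Python snippet is a toy illustration only: the prime field F_17 and all parameter choices are assumptions made for demonstration and do not come from the paper.

```python
# Toy illustration of the GRS code definition (1) over a prime field F_p.
p = 17                               # field size (prime, so F_p is integers mod p)
n, k = 8, 3
alphas = [1, 2, 3, 4, 5, 6, 7, 8]    # distinct nonzero evaluation points
ws = [1] * n                         # column multipliers (all 1: plain RS)

def poly_eval(f, x, p):
    """Evaluate f (coefficient list, lowest degree first) at x, mod p."""
    y = 0
    for c in reversed(f):
        y = (y * x + c) % p
    return y

def grs_encode(f, alphas, ws, p):
    """Codeword (w_0 f(alpha_0), ..., w_{n-1} f(alpha_{n-1})) as in (1)."""
    return [(w * poly_eval(f, a, p)) % p for a, w in zip(alphas, ws)]

f = [5, 1, 2]                        # information polynomial 5 + X + 2X^2, deg < k
c = grs_encode(f, alphas, ws, p)
```

Since the code is MDS, any two distinct information polynomials of degree less than k yield codewords differing in at least d = n − k + 1 positions, which is easy to spot-check on examples like the one above.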
One can easily show that a polynomial Q(X,Y) that fulfils the above conditions can be constructed whenever E(s,ℓ,τ) > 0, where

    E(s,ℓ,τ) ≜ (ℓ+1)s(n−τ) − \binom{ℓ+1}{2}(k−1) − \binom{s+1}{2} n   (2)

is the difference between the maximal number of coefficients of Q(X,Y) and the number of homogeneous linear equations on Q(X,Y) specified by the interpolation constraint. This determines the maximal number of correctable errors, and one can show that satisfactory s and ℓ can always be chosen whenever τ < n − √(n(k−1)) (for n → ∞, see e.g. [7]).

Definition 2 (Permissible Triples) An integer triple (s,ℓ,τ) ∈ (Z^+)^3 is permissible if E(s,ℓ,τ) > 0. We define also the decoding radius-function τ(s,ℓ) as the greatest integer such that (s,ℓ,τ(s,ℓ)) is permissible.

It is easy to show that E(s,ℓ,τ) > 0 for s > ℓ implies τ < ⌊(n−k)/2⌋, which is half the minimum distance. Therefore, it never makes sense to consider s > ℓ, and in the remainder we will always assume s ≤ ℓ. Furthermore, we will also assume s,ℓ ∈ O(n²) since this e.g. holds for any τ for the closed-form expressions in [7].

2.3 Module Reformulation of Guruswami–Sudan

Let M_{s,ℓ} ⊂ F_q[X,Y] denote the space of all bivariate polynomials passing through the points (α_0,r_0′),...,(α_{n−1},r_{n−1}′) with multiplicity s and with Y-degree at most ℓ. We are searching for an element of M_{s,ℓ} with low (1,k−1)-weighted degree. Following the ideas of Lee and O'Sullivan [8], we can first remark that M_{s,ℓ} is an F_q[X]-module. Second, we can give an explicit basis for M_{s,ℓ}. Define first the two polynomials G(X) ≜ ∏_{i=0}^{n−1}(X − α_i) as well as R(X) as the Lagrange polynomial going through the points (α_i, r_i′) for i = 0,...,n−1. Denote by Q[t](X) the Y^t-coefficient of Q(X,Y) when Q is regarded over F_q[X][Y].

Lemma 3 Let Q(X,Y) ∈ M_{s,ℓ}. Then G(X)^{s−t} | Q[t](X) for t < s.

Proof Q(X,Y) interpolates the n points (α_i, r_i′) with multiplicity s, so for any i, Q(X+α_i, Y+r_i′) = Σ_{j=0}^{ℓ} Q[j](X+α_i)(Y+r_i′)^j has no monomials of total degree less than s. Multiplying out the (Y+r_i′)^j-terms, Q[t](X+α_i)Y^t is the only term with Y-degree t.
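The permissibility condition (2) is easy to evaluate numerically. The following Python sketch computes E(s,ℓ,τ) and the decoding radius-function τ(s,ℓ) by direct search; the code parameters n = 16, k = 4 used below are illustrative assumptions, not values from the paper.

```python
from math import comb

def E(s, l, tau, n, k):
    """Difference (2): free coefficients of Q minus the number of
    homogeneous linear constraints from the interpolation conditions."""
    return (l + 1) * s * (n - tau) - comb(l + 1, 2) * (k - 1) - comb(s + 1, 2) * n

def decoding_radius(s, l, n, k):
    """tau(s, l): the greatest tau such that (s, l, tau) is permissible."""
    tau = 0
    while E(s, l, tau + 1, n, k) > 0:
        tau += 1
    return tau
```

For an n = 16, k = 4 code, decoding_radius(1, 1, 16, 4) gives 6, which is half the minimum distance, while decoding_radius(2, 3, 16, 4) already gives 7, illustrating how larger (s, ℓ) with s ≤ ℓ push the radius beyond half-distance decoding.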
Therefore Q[t](X+α_i) can have no monomials of degree less than s−t, which implies (X−α_i)^{s−t} | Q[t](X). As this holds for any i, we have proved the lemma. ⊓⊔

Theorem 4 The module M_{s,ℓ} is generated as an F_q[X]-module by the ℓ+1 polynomials P^{(t)}(X,Y) ∈ F_q[X,Y] given by

    P^{(t)}(X,Y) = G(X)^{s−t} (Y − R(X))^t,  for 0 ≤ t < s,
    P^{(t)}(X,Y) = Y^{t−s} (Y − R(X))^s,   for s ≤ t ≤ ℓ.

Proof It is easy to see that each P^{(t)}(X,Y) ∈ M_{s,ℓ} since both G(X) and (Y − R(X)) go through the n points (α_i, r_i′) with multiplicity one, and G(X) and (Y − R(X)) divide P^{(t)}(X,Y) with total power s for each t.

To see that any element of M_{s,ℓ} can be written as an F_q[X]-combination of the P^{(t)}(X,Y), let Q(X,Y) be some element of M_{s,ℓ}. Then the polynomial Q^{(ℓ−1)}(X,Y) = Q(X,Y) − Q[ℓ] P^{(ℓ)}(X,Y) has Y-degree at most ℓ−1. Since both Q(X,Y) and P^{(ℓ)}(X,Y) are in M_{s,ℓ}, so must Q^{(ℓ−1)}(X,Y) be in M_{s,ℓ}. Since P^{(t)}(X,Y) has Y-degree t and P^{(t)}_{[t]}(X) = 1 for t = ℓ, ℓ−1,...,s, we can continue reducing this way until we reach a Q^{(s−1)}(X,Y) ∈ M_{s,ℓ} with Y-degree at most s−1. From then on, we have P^{(t)}_{[t]}(X) = G(X)^{s−t}, but by Lemma 3, we must also have G(X) | Q^{(s−1)}_{[s−1]}(X), so we can also reduce by P^{(s−1)}(X,Y). This can be continued with the remaining P^{(t)}(X,Y), eventually reducing the remainder to 0. ⊓⊔

We can represent the basis of M_{s,ℓ} by the (ℓ+1) × (ℓ+1) matrix A_{s,ℓ} = ‖P^{(i)}_{[j]}(X)‖_{i=0,j=0}^{ℓ,ℓ} over F_q[X]. Any F_q[X]-linear combination of rows of A_{s,ℓ} thus corresponds to an element in M_{s,ℓ} by its tth term being the F_q[X]-coefficient to Y^t. All other bases of M_{s,ℓ} can be similarly represented by matrices, and these will be unimodular equivalent to A_{s,ℓ}, i.e., they can be obtained by multiplying A_{s,ℓ} on the left with an invertible matrix over F_q[X].
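To make Theorem 4 and the matrix A_{s,ℓ} concrete, the following Python sketch builds G(X), the Lagrange polynomial R(X), and the rows of A_{s,ℓ} over a toy prime field. The prime p = 17, the interpolation points, and the choice s = 2, ℓ = 3 are illustrative assumptions, not values from the paper.

```python
from math import comb

p = 17  # toy prime field F_p; polynomials are coefficient lists, lowest degree first

def pmul(a, b):
    res = [0] * (len(a) + len(b) - 1)
    for i, ca in enumerate(a):
        for j, cb in enumerate(b):
            res[i + j] = (res[i + j] + ca * cb) % p
    return res

def padd(a, b):
    m = max(len(a), len(b))
    a, b = a + [0] * (m - len(a)), b + [0] * (m - len(b))
    return [(x + y) % p for x, y in zip(a, b)]

def ppow(a, e):
    r = [1]
    for _ in range(e):
        r = pmul(r, a)
    return r

def peval(f, x):
    y = 0
    for c in reversed(f):
        y = (y * x + c) % p
    return y

def lagrange(points):
    """R(X) with R(alpha_i) = r_i' for all interpolation points."""
    R = [0]
    for xi, yi in points:
        num, den = [1], 1
        for xj, _ in points:
            if xj != xi:
                num = pmul(num, [(-xj) % p, 1])
                den = den * (xi - xj) % p
        R = padd(R, pmul([yi * pow(den, -1, p) % p], num))
    return R

def basis_matrix(s, l, G, R):
    """A_{s,l}: row t holds the F_p[X]-coefficients of P^(t)(X,Y) w.r.t. Y^0..Y^l."""
    negR = [(-c) % p for c in R]
    A = []
    for t in range(l + 1):
        row = [[0]] * (l + 1)
        if t < s:    # P^(t) = G^(s-t) (Y - R)^t
            Gp = ppow(G, s - t)
            for j in range(t + 1):
                row[j] = pmul(Gp, pmul([comb(t, j) % p], ppow(negR, t - j)))
        else:        # P^(t) = Y^(t-s) (Y - R)^s
            for m in range(s + 1):
                row[t - s + m] = pmul([comb(s, m) % p], ppow(negR, s - m))
        A.append(row)
    return A

points = [(1, 3), (2, 5), (3, 1), (4, 0)]    # (alpha_i, r_i'), illustrative
G = [1]
for xi, _ in points:
    G = pmul(G, [(-xi) % p, 1])              # G(X) = prod (X - alpha_i)
R = lagrange(points)
A = basis_matrix(2, 3, G, R)                 # s = 2, l = 3
```

Evaluating any row of A as a bivariate polynomial at a point (α_i, r_i′) must give 0, which offers a cheap (multiplicity-1) sanity check of membership in M_{s,ℓ}.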
Extending the work of Lee and O'Sullivan [8], Beelen and Brander [2] gave a fast algorithm for computing a satisfactory Q(X,Y): start with A_{s,ℓ} as a basis of M_{s,ℓ} and compute a different, "minimal" basis of M_{s,ℓ} where an element of minimal (1,k−1)-weighted degree appears directly.¹ In the following section, we give further details on how to compute such a basis, but our ultimate aims in Section 4 are different: we will use a minimal basis of M_{s,ℓ} to efficiently compute one for M_{ŝ,ℓ̂} for ŝ ≥ s and ℓ̂ > ℓ. This will allow an iterative refinement for increasing s and ℓ, where after each step we have such a minimal basis for M_{s,ℓ}. We then exploit this added flexibility in our multi-trial algorithm.

¹ Actually, in both [8,2], a slight variant of A_{s,ℓ} is used, but the difference is non-essential.

3 Module Minimisation

Given a basis of M_{s,ℓ}, e.g. A_{s,ℓ}, module minimisation here refers to the process of obtaining a new basis which is the smallest among all bases of M_{s,ℓ} in a precise sense. We will define this and connect various known properties of such matrices, and use this to more precisely bound the asymptotic complexity with which they can be computed by Alekhnovich's algorithm.

Definition 5 (Weak Popov Form [10]) A matrix V over F_q[X] is in weak Popov form if and only if the leading position of each row is different.

We are essentially interested in short vectors in a module, and the following lemma shows that the simple concept of weak Popov form will provide this. It is a paraphrasing of [1, Proposition 2.3] and we omit the proof.

Lemma 6 (Minimal Degree) If a square matrix V over F_q[X] is in weak Popov form, then one of its rows has minimal degree of all vectors in the row space of V.

Denote now by W_ℓ the diagonal (ℓ+1) × (ℓ+1) matrix over F_q[X]:

    W_ℓ ≜ diag(1, X^{k−1}, ..., X^{ℓ(k−1)}).   (3)

Since we seek an element of minimal (1,k−1)-weighted degree, we also need the following corollary.

Corollary 7 (Minimal Weighted Degree) Let B ∈ F_q[X]^{(ℓ+1)×(ℓ+1)} be the matrix representation of a basis of M_{s,ℓ}.
If BW_ℓ is in weak Popov form, then one of the rows of B corresponds to a polynomial in M_{s,ℓ} with minimal (1,k−1)-weighted degree.

Proof Let B̃ = BW_ℓ. Now, B̃ will correspond to the basis of an F_q[X]-module M̃ isomorphic to M_{s,ℓ}, where an element Q(X,Y) ∈ M_{s,ℓ} is mapped to Q(X, X^{k−1}Y) ∈ M̃. By Lemma 6, the row of minimal degree in B̃ will correspond to an element of M̃ with minimal X-degree. Therefore, the same row of B corresponds to an element of M_{s,ℓ} with minimal (1,k−1)-weighted degree. ⊓⊔

We introduce what will turn out to be a measure of how far a matrix is from being in weak Popov form.

Definition 8 (Orthogonality Defect [9]) Let the orthogonality defect of a square matrix V over F_q[X] be defined as D(V) ≜ deg V − deg det V.

Lemma 9 If a square matrix V over F_q[X] is in weak Popov form then D(V) = 0.

Proof Let v_0,...,v_{m−1} be the rows of V ∈ F_q[X]^{m×m} and v_{i,0},...,v_{i,m−1} the elements of v_i. In the alternating sum-expression for det V, the term ∏_{i=0}^{m−1} LT(v_i) will occur since the leading positions of the v_i are all different. Thus deg det V = Σ_{i=0}^{m−1} deg LT(v_i) = deg V unless leading term cancellation occurs in the determinant expression. However, no other term in the determinant has this degree: regard some (unsigned) term in det V, say t = ∏_{i=0}^{m−1} v_{i,σ(i)} for some permutation σ ∈ S_m. If not σ(i) = LP(v_i) for all i, then there must be an i such that σ(i) > LP(v_i), since Σ_j σ(j) is the same for all σ ∈ S_m. Thus deg v_{i,σ(i)} < deg v_{i,LP(v_i)}. As none of the other terms in t can have greater degree than their corresponding row's leading term, we get deg t < Σ_{i=0}^{m−1} deg LT(v_i). Thus, D(V) = 0. However, the above also proves that the orthogonality defect is at least 0 for any matrix. Since any matrix unimodular equivalent to V has the same determinant, V must therefore have minimal row-degree among these matrices.
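Definitions 5 and 8 can be checked mechanically for small matrices. The sketch below is not an algorithm from the paper: it represents polynomials as integer coefficient lists and computes the determinant by the Leibniz formula, which is only feasible for tiny illustrative matrices.

```python
from itertools import permutations

def deg(f):
    """Degree of a coefficient list (lowest degree first); -inf for the zero poly."""
    for i in range(len(f) - 1, -1, -1):
        if f[i] != 0:
            return i
    return float('-inf')

def row_deg(row):
    return max(deg(f) for f in row)

def leading_position(row):
    d = row_deg(row)
    return max(i for i, f in enumerate(row) if deg(f) == d)

def is_weak_popov(V):
    lps = [leading_position(row) for row in V]
    return len(set(lps)) == len(lps)

def pmul(a, b):
    res = [0] * (len(a) + len(b) - 1)
    for i, ca in enumerate(a):
        for j, cb in enumerate(b):
            res[i + j] += ca * cb
    return res

def padd(a, b):
    m = max(len(a), len(b))
    return [(a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0)
            for i in range(m)]

def det(V):
    """Determinant in Z[X] via the Leibniz formula (tiny matrices only)."""
    m = len(V)
    total = [0]
    for sigma in permutations(range(m)):
        inv = sum(sigma[i] > sigma[j] for i in range(m) for j in range(i + 1, m))
        term = [(-1) ** inv]
        for i in range(m):
            term = pmul(term, V[i][sigma[i]])
        total = padd(total, term)
    return total

def defect(V):
    """Orthogonality defect D(V) = deg V - deg det V (Definition 8)."""
    return sum(row_deg(r) for r in V) - deg(det(V))
```

For example, the matrix [[X, 1], [1, X²]] has distinct leading positions and defect 0, in line with Lemma 9, while [[X, 1], [X², 1]] has both leading positions equal and defect 1.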
⊓⊔

Alekhnovich [1] gave a fast algorithm for transforming a matrix over F_q[X] to weak Popov form. For the special case of square matrices, a finer description of its asymptotic complexity can be reached in terms of the orthogonality defect, and this is essential for our decoder.

Lemma 10 (Alekhnovich's Row-Reducing Algorithm) Alekhnovich's algorithm inputs a matrix V ∈ F_q[X]^{m×m} and outputs a unimodular equivalent matrix which is in weak Popov form. Let N be the greatest degree of a term in V. If N ∈ O(D(V)), then the algorithm has asymptotic complexity

    O(m³ D(V) log² D(V) log log D(V)) operations over F_q.

Proof The description of the algorithm as well as the proof of its correctness can be found in [1]. We only prove the claim on the complexity. The method R(V,t) of [1] computes a unimodular matrix U such that deg(UV) ≤ deg V − t or UV is in weak Popov form. According to [1, Lemma 2.10], the asymptotic complexity of this computation is in O(m³ t log² t log log t). Due to Lemma 9, we can set t = D(V) to be sure that UV is in weak Popov form. What remains is just to compute the product UV. Due to [1, Lemma 2.8], each entry in U can be represented as p(X)X^d for some d ∈ N_0 and p(X) ∈ F_q[X] of degree at most 2t. If therefore N ∈ O(D(V)), the complexity of performing the matrix multiplication using the naive algorithm is O(m³ D(V)). ⊓⊔

4 Multi-Trial List Decoding

4.1 Basic Idea

Using the results of the preceding section, we show in Section 4.2 that given a basis of M_{s,ℓ} as a matrix B_{s,ℓ} in weak Popov form, we can write down a matrix C^I_{s,ℓ+1} which is a basis of M_{s,ℓ+1} and whose orthogonality defect is much lower than that of A_{s,ℓ+1}. This means that reducing C^I_{s,ℓ+1} to weak Popov form using Alekhnovich's algorithm is faster than reducing A_{s,ℓ+1}. We call this kind of refinement a "micro-step of type I". In Section 4.3, we similarly give a way to refine a basis of M_{s,ℓ} to one of M_{s+1,ℓ+1}, and we call this a micro-step of type II.

If we first compute a basis in weak Popov form of M_{1,1} using A_{1,1}, we can perform a sequence of micro-steps of type I and II to compute a basis in weak Popov form of M_{s,ℓ} for any s, ℓ with ℓ ≥ s.
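A weak Popov form can also be reached by repeating elementary "simple transformations", each cancelling the leading term of one of two rows that share a leading position. The sketch below implements this slow, quadratic-style reduction (in the spirit of Mulders–Storjohann) over a toy prime field; it is emphatically not Alekhnovich's divide-and-conquer algorithm, which achieves the complexity of Lemma 10, but it produces the same kind of output on small inputs.

```python
p = 17  # toy prime field; polynomials are coefficient lists, lowest degree first

def deg(f):
    for i in range(len(f) - 1, -1, -1):
        if f[i] % p != 0:
            return i
    return -1  # zero polynomial

def row_deg(row):
    return max(deg(f) for f in row)

def lp(row):
    d = row_deg(row)
    return max(i for i, f in enumerate(row) if deg(f) == d)

def weak_popov(V):
    """Reduce V (rows of coefficient lists over F_p) to weak Popov form by
    repeated simple transformations; a slow stand-in for Alekhnovich's
    fast algorithm, suitable only for small examples."""
    V = [[list(f) for f in row] for row in V]
    while True:
        seen, clash = {}, None
        for idx, row in enumerate(V):
            if row_deg(row) < 0:          # ignore zero rows
                continue
            pos = lp(row)
            if pos in seen:
                clash = (seen[pos], idx)
                break
            seen[pos] = idx
        if clash is None:
            return V                      # all leading positions distinct
        i, j = clash
        if row_deg(V[i]) > row_deg(V[j]): # reduce the row of larger degree
            i, j = j, i
        pos = lp(V[i])
        d = row_deg(V[j]) - row_deg(V[i])
        c = (V[j][pos][deg(V[j][pos])]
             * pow(V[i][pos][deg(V[i][pos])], -1, p)) % p
        for col in range(len(V[j])):      # row_j -= c * X^d * row_i
            f, g = V[j][col], V[i][col]
            f += [0] * (len(g) + d - len(f))
            for m, gm in enumerate(g):
                f[m + d] = (f[m + d] - c * gm) % p
```

Each simple transformation strictly decreases the degree or the leading position of the reduced row, so the loop terminates; for full-rank input the result is unimodular equivalent to the input.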
After any step, having some intermediate ŝ ≤ s, ℓ̂ ≤ ℓ, we will thus have a basis of M_{ŝ,ℓ̂} in weak Popov form. By Corollary 7, we could extract from B_{ŝ,ℓ̂} a Q̂(X,Y) ∈ M_{ŝ,ℓ̂} with minimal (1,k−1)-weighted degree. Since it must satisfy the interpolation conditions of Theorem 1, and since the weighted degree is minimal among such polynomials, it must also satisfy the degree constraints for τ̂ = τ(ŝ,ℓ̂). By that theorem, any codeword with distance at most τ̂ from r would then be represented by a root of Q̂(X,Y).

Algorithm 1 is a generalisation and formalisation of this method. For a given GRS(n,k) code, one chooses ultimate parameters (s,ℓ,τ) being a permissible triple with s ≤ ℓ. One also chooses a list of micro-steps and chooses after which micro-steps to attempt decoding; these choices are represented by a list of S1, S2 and Root elements. This list must contain exactly ℓ−s S1-elements and s−1 S2-elements, as it begins by computing a basis for M_{1,1} and will end with a basis for M_{s,ℓ}. If there is a Root element in the list, the algorithm finds all codewords with distance at most τ̂ = τ(ŝ,ℓ̂) from r; if this list is non-empty, the computation breaks and the list is returned.

The algorithm calls sub-functions which we explain informally: MicroStep1 and MicroStep2 take ŝ, ℓ̂ and a basis in weak Popov form for M_{ŝ,ℓ̂} and return a basis in weak Popov form for M_{ŝ,ℓ̂+1} respectively M_{ŝ+1,ℓ̂+1}; more detailed descriptions for these are given in Subsections 4.2 and 4.3. MinimalWeightedRow finds a polynomial of minimal (1,k−1)-weighted degree in M_{ŝ,ℓ̂} given a basis in weak Popov form (Corollary 7). Finally, RootFinding(Q,τ) returns all Y-roots of Q(X,Y) of degree less than k and whose corresponding codeword has distance at most τ from the received word r.
Algorithm 1: Multi-Trial Guruswami–Sudan Decoding

Input: A GRS(n,k) code and the received vector r = (r_0,...,r_{n−1});
  a permissible triple (s,ℓ,τ);
  a list C with elements in {S1, S2, Root} with s−1 instances of S2 and ℓ−s instances of S1.

Preprocessing: Calculate r_i′ = r_i/w_i for all i = 0,...,n−1.
  Construct A_{1,1}, and compute B_{1,1} from A_{1,1}W_1 using Alekhnovich's algorithm.
  Initial parameters (ŝ,ℓ̂) ← (1,1).

1   for each c in C do
2     if c = S1 then
3       B_{ŝ,ℓ̂+1} ← MicroStep1(ŝ, ℓ̂, B_{ŝ,ℓ̂})
4       (ŝ,ℓ̂) ← (ŝ, ℓ̂+1)
5     if c = S2 then
6       B_{ŝ+1,ℓ̂+1} ← MicroStep2(ŝ, ℓ̂, B_{ŝ,ℓ̂})
7       (ŝ,ℓ̂) ← (ŝ+1, ℓ̂+1)
8     if c = Root then
9       Q(X,Y) ← MinimalWeightedRow(B_{ŝ,ℓ̂})
10      if RootFinding(Q(X,Y), τ(ŝ,ℓ̂)) ≠ ∅ then
11        return this list

Algorithm 1 has a large amount of flexibility in the choice of the list C, but since we can only perform micro-steps of type I and II, there are choices of s and ℓ we can never reach, or some which we cannot reach if we first wish to reach an earlier s and ℓ. We can never reach s > ℓ, but as mentioned in Section 2, such a choice never makes sense. It also seems to be the case that a succession of sensibly chosen parameters can always be reached by micro-steps of type I and II. That is, if we first wish to attempt decoding at some radius τ_1 and thereafter continue to τ_2 > τ_1 in case of failure, the minimal possible s_1, ℓ_1 and s_2, ℓ_2 such that (s_1,ℓ_1,τ_1) respectively (s_2,ℓ_2,τ_2) are permissible will satisfy 0 ≤ s_2 − s_1 ≤ ℓ_2 − ℓ_1. However, we have yet to formalise and prove such a statement.

In the following two subsections we explain the details of the micro-steps. In Section 4.4, we discuss the complexity of the method and how the choice of C influences this.

4.2 Micro-Step Type I: (s,ℓ) ↦ (s,ℓ+1)

Lemma 11 If B^{(0)}(X,Y),...,B^{(ℓ)}(X,Y) is a basis of M_{s,ℓ}, then the following is a basis of M_{s,ℓ+1}:

    B^{(0)}(X,Y), ..., B^{(ℓ)}(X,Y), Y^{ℓ−s+1}(Y − R(X))^s.

Proof In the basis of M_{s,ℓ+1} given in Theorem 4, the first ℓ+1 generators are the generators of M_{s,ℓ}. Thus all of these can be described by any basis of M_{s,ℓ}. The last remaining generator is exactly Y^{ℓ−s+1}(Y − R(X))^s.
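The control flow of Algorithm 1 for a given step list C can be simulated without any module arithmetic. The Python sketch below is illustrative only: the micro-steps and root-finding are abstracted away, and the code merely tracks (ŝ, ℓ̂) and records the radius τ(ŝ, ℓ̂) at which each Root element would attempt decoding; n = 16, k = 4 in the tests are assumed example parameters.

```python
from math import comb

def E(s, l, tau, n, k):
    return (l + 1) * s * (n - tau) - comb(l + 1, 2) * (k - 1) - comb(s + 1, 2) * n

def tau_of(s, l, n, k):
    t = 0
    while E(s, l, t + 1, n, k) > 0:
        t += 1
    return t

def trial_schedule(C, n, k):
    """Trace Algorithm 1's parameter evolution: the list of (s_hat, l_hat,
    tau(s_hat, l_hat)) at which each Root element would attempt decoding."""
    s_hat, l_hat = 1, 1
    attempts = []
    for c in C:
        if c == 'S1':                 # micro-step type I: (s, l) -> (s, l+1)
            l_hat += 1
        elif c == 'S2':               # micro-step type II: (s, l) -> (s+1, l+1)
            s_hat += 1
            l_hat += 1
        elif c == 'Root':             # decoding attempt at the current radius
            attempts.append((s_hat, l_hat, tau_of(s_hat, l_hat, n, k)))
    return attempts
```

Such a trace makes it easy to check that a candidate C attempts decoding at strictly increasing radii, e.g. first at half the minimum distance and finally at the ultimate radius τ(s, ℓ).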
⊓⊔

In particular, the above lemma holds for a basis of M_{s,ℓ} in weak Popov form, represented by a matrix B_{s,ℓ}. The following matrix thus represents a basis of M_{s,ℓ+1}:

    C^I_{s,ℓ+1} = \begin{pmatrix} B_{s,ℓ} & \mathbf{0}^T \\ 0 \cdots 0 \;\; (−R)^s \;\; \binom{s}{1}(−R)^{s−1} \;\cdots\; 1 \end{pmatrix}.   (4)

Lemma 12 D(C^I_{s,ℓ+1} W_{ℓ+1}) = s(deg R − k + 1) ≤ s(n−k).

Proof We calculate the two quantities det(C^I_{s,ℓ+1}W_{ℓ+1}) and deg(C^I_{s,ℓ+1}W_{ℓ+1}). It is easy to see that

    det(C^I_{s,ℓ+1}W_{ℓ+1}) = det B_{s,ℓ} det W_{ℓ+1} = det B_{s,ℓ} det W_ℓ X^{(ℓ+1)(k−1)}.

The row-degree is clearly deg(B_{s,ℓ}W_ℓ) plus the row-degree of the last row. If and only if the received word is not a codeword then deg R ≥ k, and then the leading term of the last row must be (−R)^s X^{(ℓ+1−s)(k−1)}. Thus, we get

    D(C^I_{s,ℓ+1}W_{ℓ+1}) = (deg(B_{s,ℓ}W_ℓ) + s deg R + (ℓ+1−s)(k−1)) − (deg det(B_{s,ℓ}W_ℓ) + (ℓ+1)(k−1)) = s(deg R − k + 1),

where the last step follows from Lemma 9 as B_{s,ℓ}W_ℓ is in weak Popov form. ⊓⊔

Corollary 13 The complexity of MicroStep1(s,ℓ,B_{s,ℓ}) is O(ℓ³ s n log² n log log n).

Proof Follows by Lemma 10. Since s ∈ O(n²), we can leave out the s in log-terms. ⊓⊔

4.3 Micro-Step Type II: (s,ℓ) ↦ (s+1,ℓ+1)

Lemma 14 If B^{(0)}(X,Y),...,B^{(ℓ)}(X,Y) is a basis of M_{s,ℓ}, then the following is a basis of M_{s+1,ℓ+1}:

    G^{s+1}(X), B^{(0)}(X,Y)(Y − R(X)), ..., B^{(ℓ)}(X,Y)(Y − R(X)).

Proof Denote by P^{(0)}_{s,ℓ}(X,Y),...,P^{(ℓ)}_{s,ℓ}(X,Y) the basis of M_{s,ℓ} as given in Theorem 4, and by P^{(0)}_{s+1,ℓ+1}(X,Y),...,P^{(ℓ+1)}_{s+1,ℓ+1}(X,Y) the basis of M_{s+1,ℓ+1}. Then observe that for t > 0, we have P^{(t)}_{s+1,ℓ+1} = P^{(t−1)}_{s,ℓ}(Y − R(X)). Since the B^{(i)}(X,Y) form a basis of M_{s,ℓ}, each P^{(t)}_{s,ℓ} is expressible as an F_q[X]-combination of these, and thus for t > 0, P^{(t)}_{s+1,ℓ+1} is expressible as an F_q[X]-combination of the B^{(i)}(X,Y)(Y − R(X)). Remaining is then only P^{(0)}_{s+1,ℓ+1}(X,Y) = G^{s+1}(X). ⊓⊔

As before, we can use the above with the basis B_{s,ℓ} of M_{s,ℓ} in weak Popov form, found in the previous iteration of our algorithm. Remembering that multiplying by Y translates to shifting one column to the right in the matrix representation, the following matrix thus represents a basis of M_{s+1,ℓ+1}:

    C^{II}_{s+1,ℓ+1} = \begin{pmatrix} G^{s+1} & \mathbf{0} \\ \mathbf{0}^T & B_{s,ℓ} \end{pmatrix} + (−R) \begin{pmatrix} 0 & \mathbf{0} \\ B_{s,ℓ} & \mathbf{0}^T \end{pmatrix}.   (5)
Lemma 15 D(C^{II}_{s+1,ℓ+1}W_{ℓ+1}) = (ℓ+1)(deg R − k + 1) ≤ (ℓ+1)(n−k).

Proof We compute deg(C^{II}_{s+1,ℓ+1}W_{ℓ+1}) and deg det(C^{II}_{s+1,ℓ+1}W_{ℓ+1}). For the former, obviously the first row has degree (s+1)n. Let b_i denote the ith row of B_{s,ℓ} and b_i′ denote the ith row of B_{s,ℓ}W_ℓ. The (i+1)th row of C^{II}_{s+1,ℓ+1}W_{ℓ+1} has the form

    [(0 | b_i) − R(b_i | 0)] W_{ℓ+1} = (0 | b_i′)X^{k−1} − R(b_i′ | 0).

If and only if the received word is not a codeword, then deg R ≥ k. In this case, the leading term of R b_i′ must have greater degree than any term in X^{k−1} b_i′. Thus the degree of the above row is deg R + deg b_i′. Summing up we get

    deg(C^{II}_{s+1,ℓ+1}W_{ℓ+1}) = (s+1)n + Σ_{i=0}^{ℓ} (deg R + deg b_i′) = (s+1)n + (ℓ+1) deg R + deg(B_{s,ℓ}W_ℓ).

For the determinant, observe that

    det(C^{II}_{s+1,ℓ+1}W_{ℓ+1}) = det(C^{II}_{s+1,ℓ+1}) det(W_{ℓ+1}) = G^{s+1} det B̃ det W_ℓ X^{(ℓ+1)(k−1)},

where B̃ = B_{s,ℓ} − R[B̀_{s,ℓ} | 0^T] and B̀_{s,ℓ} is all but the zeroth column of B_{s,ℓ}. This means B̃ can be obtained by starting from B_{s,ℓ} and iteratively adding the (j+1)th column of B_{s,ℓ} scaled by −R(X) to the jth column, with j starting from 0 up to ℓ−1. Since each of these operations adds a scaled version of an existing column of the matrix, they do not change the determinant. Thus det B̃ = det B_{s,ℓ}. But then det B̃ det W_ℓ = det(B_{s,ℓ}W_ℓ), and so deg(det B̃ det W_ℓ) = deg(B_{s,ℓ}W_ℓ) by Lemma 9, since B_{s,ℓ}W_ℓ is in weak Popov form. Thus we get

    deg det(C^{II}_{s+1,ℓ+1}W_{ℓ+1}) = (s+1)n + deg(B_{s,ℓ}W_ℓ) + (ℓ+1)(k−1).

The lemma follows from the difference of the two calculated quantities. ⊓⊔

Corollary 16 The complexity of MicroStep2(s,ℓ,B_{s,ℓ}) is O(ℓ⁴ n log² n log log n).

4.4 Complexity Analysis

Using the estimates of the two preceding subsections, we can make a rather precise worst-case asymptotic complexity analysis of our multi-trial decoder. The average running time will depend on the exact choice of C, but we will see that the worst-case complexity will not. First, it is necessary to know the complexity of performing a root-finding attempt.
Lemma 17 (Complexity of Root-Finding) Given a polynomial Q(X,Y) ∈ F_q[X][Y] of Y-degree at most ℓ and X-degree at most N, there exists an algorithm to find all F_q[X]-roots of complexity O(ℓ² N log² N log log N), assuming ℓ, q ∈ O(N).

Proof We employ the Roth–Ruckenstein [11] root-finding algorithm together with the divide-and-conquer speed-up by Alekhnovich [1]. The complexity analysis in [1] needs to be slightly improved to yield the above, but see [3] for easy amendments.

Theorem 18 (Complexity of Algorithm 1) For a given GRS(n,k) code, as well as a given list of steps C for Algorithm 1 with ultimate parameters (s,ℓ,τ), the algorithm has worst-case complexity O(ℓ⁴ s n log² n log log n), assuming q ∈ O(n).

Proof The worst-case complexity corresponds to the case where we do not break early but run through the entire list C. Precomputing A_{1,1} using Lagrangian interpolation can be performed in O(n log² n log log n), see e.g. [5, p. 235], and reducing it to B_{1,1} is in the same complexity by Lemma 10.

Now, C must contain exactly ℓ−s S1-elements and s−1 S2-elements. The complexities given in Corollaries 13 and 16 for some intermediate ŝ, ℓ̂ can be relaxed to s and ℓ. Performing O(ℓ) micro-steps of type I and O(s) of type II is therefore in O(ℓ⁴ s n log² n log log n).

It only remains to count the root-finding steps. Obviously, it never makes sense to have two Root elements directly after each other in C, so after removing such possible duplicates, there can be at most ℓ Root elements. When we perform root-finding for intermediate ŝ, ℓ̂, we do so on a polynomial in M_{ŝ,ℓ̂} of minimal weighted degree, and by the definition of M_{ŝ,ℓ̂} as well as Theorem 1, this weighted degree will be less than ŝ(n−τ̂) < sn. Thus we can apply Lemma 17 with N = sn. ⊓⊔

The worst-case complexity of our algorithm is equal to the average-case complexity of the Beelen–Brander [2] list decoder. However, Theorem 18 shows that we can choose as many intermediate decoding attempts as we would like without changing the worst-case complexity.
One could therefore choose to perform a decoding attempt just after computing B_{1,1} as well as every time the decoding radius has increased. The result would be a decoding algorithm finding all closest codewords within some ultimate radius τ. If one is working in a decoding model where such a list suffices, our algorithm will thus have much better average-case complexity, since fewer errors occur much more frequently than many.

5 Conclusion

An iterative interpolation procedure for list decoding GRS codes based on Alekhnovich's module minimisation was proposed and shown to have the same worst-case complexity as Beelen and Brander's [2]. We showed how the target module used