Counting points on elliptic curves

An important aspect in the study of elliptic curves is devising effective ways of counting points on the curve. There have been several approaches to do so, and the algorithms devised have proved to be useful tools in the study of various fields such as number theory, and more recently in cryptography and digital signature authentication (see elliptic curve cryptography and elliptic curve DSA). In number theory they have important consequences for the solution of Diophantine equations; in cryptography, they enable us to make effective use of the difficulty of the discrete logarithm problem (DLP) for the group [math]\displaystyle{ E(\mathbb{F}_q) }[/math] of an elliptic curve over a finite field [math]\displaystyle{ \mathbb{F}_q }[/math], where q = p^k and p is a prime. The DLP, as it has come to be known, is a widely used approach to public key cryptography, and the difficulty in solving this problem determines the level of security of the cryptosystem. This article covers algorithms to count points on elliptic curves over fields of large characteristic, in particular p > 3. For curves over fields of small characteristic, more efficient algorithms based on p-adic methods exist.

Approaches to counting points on elliptic curves

There are several approaches to the problem. Beginning with the naive approach, we trace the developments up to Schoof's definitive work on the subject, while also listing the improvements to Schoof's algorithm made by Elkies (1990) and Atkin (1992).

Several algorithms make use of the fact that groups of the form [math]\displaystyle{ E(\mathbb{F}_q) }[/math] are subject to an important theorem due to Hasse, that bounds the number of points to be considered. Hasse's theorem states that if E is an elliptic curve over the finite field [math]\displaystyle{ \mathbb{F}_q }[/math], then the cardinality of [math]\displaystyle{ E(\mathbb{F}_q) }[/math] satisfies

[math]\displaystyle{ ||E(\mathbb{F}_q)| - (q+1)| \leq 2 \sqrt{q}. \, }[/math]
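For example, for [math]\displaystyle{ q = 5 }[/math] the bound reads [math]\displaystyle{ \left| |E(\mathbb{F}_5)| - 6 \right| \leq 2\sqrt{5} \approx 4.47 }[/math], so [math]\displaystyle{ 2 \leq |E(\mathbb{F}_5)| \leq 10 }[/math]; the curve counted in the example below indeed has 9 points.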

Naive approach

The naive approach to counting points, which is the least sophisticated, involves running through all the elements of the field [math]\displaystyle{ \mathbb{F}_q }[/math] and testing which ones satisfy the Weierstrass form of the elliptic curve

[math]\displaystyle{ y^2 = x^3 + Ax + B. \, }[/math]

Example

Let E be the curve y^2 = x^3 + x + 1 over [math]\displaystyle{ \mathbb{F}_5 }[/math]. To count points on E, we make a list of the possible values of x, then of the quadratic residues modulo 5 (for lookup purposes only), then of x^3 + x + 1 mod 5, and finally of the values of y satisfying y^2 = x^3 + x + 1 mod 5. This yields the points on E.

[math]\displaystyle{ x }[/math] [math]\displaystyle{ x^2 }[/math] [math]\displaystyle{ x^3 + x + 1 }[/math] [math]\displaystyle{ y }[/math] Points
[math]\displaystyle{ \quad 0 }[/math] [math]\displaystyle{ 0 }[/math] [math]\displaystyle{ 1 }[/math] [math]\displaystyle{ 1, 4 }[/math] [math]\displaystyle{ (0, 1), (0, 4) }[/math]
[math]\displaystyle{ \quad 1 }[/math] [math]\displaystyle{ 1 }[/math] [math]\displaystyle{ 3 }[/math] [math]\displaystyle{ - }[/math] [math]\displaystyle{ - }[/math]
[math]\displaystyle{ \quad 2 }[/math] [math]\displaystyle{ 4 }[/math] [math]\displaystyle{ 1 }[/math] [math]\displaystyle{ 1, 4 }[/math] [math]\displaystyle{ (2, 1), (2, 4) }[/math]
[math]\displaystyle{ \quad 3 }[/math] [math]\displaystyle{ 4 }[/math] [math]\displaystyle{ 1 }[/math] [math]\displaystyle{ 1, 4 }[/math] [math]\displaystyle{ (3, 1), (3, 4) }[/math]
[math]\displaystyle{ \quad 4 }[/math] [math]\displaystyle{ 1 }[/math] [math]\displaystyle{ 4 }[/math] [math]\displaystyle{ 2, 3 }[/math] [math]\displaystyle{ (4, 2), (4, 3) }[/math]

E.g. the last row is computed as follows: inserting [math]\displaystyle{ x = 4 }[/math] into x^3 + x + 1 mod 5 gives [math]\displaystyle{ 4 }[/math] (3rd column). Since 4 is a quadratic residue mod 5 (see the 2nd column), this value is attained for [math]\displaystyle{ y = 2, 3 }[/math]. So the points for the last row are [math]\displaystyle{ (4, 2), (4, 3) }[/math].

Therefore, [math]\displaystyle{ E(\mathbb{F}_5) }[/math] has cardinality 9: the 8 points listed above and the point at infinity.

This algorithm requires running time O(q), because all the values of [math]\displaystyle{ x \in \mathbb{F}_q }[/math] must be considered.
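
The following is a minimal sketch of this exhaustive count over a prime field [math]\displaystyle{ \mathbb{F}_p }[/math]; the function name naive_count and the square-root table are illustrative choices, not part of the algorithm as usually stated.

```python
def naive_count(p, A, B):
    """Count points on y^2 = x^3 + A*x + B over F_p by running through all x."""
    roots = {}                         # for each residue r mod p, the y with y^2 = r
    for y in range(p):
        roots.setdefault(y * y % p, []).append(y)
    count = 1                          # the point at infinity
    for x in range(p):
        count += len(roots.get((x * x * x + A * x + B) % p, []))
    return count

# The curve of the example: y^2 = x^3 + x + 1 over F_5 has 9 points.
print(naive_count(5, 1, 1))  # -> 9
```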

Baby-step giant-step

An improvement in running time is obtained using a different approach: we pick an element [math]\displaystyle{ P=(x,y) \in E(\mathbb{F}_q) }[/math] by selecting random values of [math]\displaystyle{ x }[/math] until [math]\displaystyle{ x^3 + Ax +B }[/math] is a square in [math]\displaystyle{ \mathbb{F}_q }[/math] and then computing the square root of this value in order to get [math]\displaystyle{ y }[/math]. Hasse's theorem tells us that [math]\displaystyle{ |E(\mathbb{F}_q)| }[/math] lies in the interval [math]\displaystyle{ (q +1 - 2 \sqrt{q}, q + 1 + 2 \sqrt{q}) }[/math]. Thus, by Lagrange's theorem, finding a unique [math]\displaystyle{ M }[/math] lying in this interval and satisfying [math]\displaystyle{ MP=O }[/math] results in finding the cardinality of [math]\displaystyle{ E(\mathbb{F}_q) }[/math]. The algorithm fails if there exist two distinct integers [math]\displaystyle{ M }[/math] and [math]\displaystyle{ M' }[/math] in the interval such that [math]\displaystyle{ MP = M'P = O }[/math]. In such a case it usually suffices to repeat the algorithm with another randomly chosen point in [math]\displaystyle{ E(\mathbb{F}_q) }[/math].
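
A minimal sketch of this point-sampling step follows, assuming for simplicity that p is a prime with p ≡ 3 (mod 4), so that square roots in [math]\displaystyle{ \mathbb{F}_p }[/math] can be taken with a single exponentiation; for general p one would use the Tonelli–Shanks algorithm. The helper name random_point is an illustrative choice.

```python
import random

def random_point(p, A, B):
    """Return a random point (x, y) on y^2 = x^3 + A*x + B over F_p, p ≡ 3 (mod 4)."""
    assert p % 4 == 3, "this sketch only handles p ≡ 3 (mod 4)"
    while True:
        x = random.randrange(p)
        rhs = (x * x * x + A * x + B) % p
        if rhs == 0:
            return (x, 0)
        # Euler's criterion: rhs is a square mod p iff rhs^((p-1)/2) = 1.
        if pow(rhs, (p - 1) // 2, p) == 1:
            return (x, pow(rhs, (p + 1) // 4, p))  # a square root, valid since p ≡ 3 (mod 4)

print(random_point(1019, 1, 1))  # a random point on y^2 = x^3 + x + 1 over F_1019
```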

Trying all values of [math]\displaystyle{ M }[/math] in order to find the one that satisfies [math]\displaystyle{ MP=O }[/math] takes around [math]\displaystyle{ 4 \sqrt{q} }[/math] steps.

However, by applying the baby-step giant-step algorithm to [math]\displaystyle{ E(\mathbb{F}_q) }[/math], we are able to speed this up to around [math]\displaystyle{ 4 \sqrt[4]{q} }[/math] steps. The algorithm is as follows.

The algorithm

1. choose an integer [math]\displaystyle{ m }[/math] with [math]\displaystyle{ m \gt  \sqrt[4]{q} }[/math]
2. FOR [math]\displaystyle{ j=0 }[/math] TO [math]\displaystyle{ m }[/math] DO
3.    [math]\displaystyle{ P_j \leftarrow jP }[/math]
4. ENDFOR
5. [math]\displaystyle{ L \leftarrow 1 }[/math]
6. [math]\displaystyle{ Q \leftarrow (q+1)P }[/math]
7. REPEAT compute the points [math]\displaystyle{ Q + k(2mP) }[/math] for [math]\displaystyle{ k = 0, 1, 2, \ldots }[/math]
8. UNTIL [math]\displaystyle{ \exists j }[/math]: [math]\displaystyle{ Q + k(2mP) = \pm P_j }[/math]  \\the [math]\displaystyle{ x }[/math]-coordinates are compared
9. [math]\displaystyle{ M \leftarrow q + 1 + 2mk \mp j }[/math]     \\note [math]\displaystyle{ MP=O }[/math]
10. Factor [math]\displaystyle{ M }[/math]. Let [math]\displaystyle{ p_1, \ldots, p_r }[/math] be the distinct prime factors of [math]\displaystyle{ M }[/math], and set [math]\displaystyle{ i \leftarrow 1 }[/math].
11. WHILE [math]\displaystyle{ i \leq r }[/math] DO
12.    IF [math]\displaystyle{ \frac{M}{p_i}P=O }[/math]
13.       THEN [math]\displaystyle{ M \leftarrow \frac{M}{p_i} }[/math]
14.       ELSE [math]\displaystyle{ i \leftarrow i+1 }[/math] 
15.    ENDIF
16. ENDWHILE
17. [math]\displaystyle{ L \leftarrow \operatorname{lcm}(L, M) }[/math]     \\note [math]\displaystyle{ M }[/math] is the order of the point [math]\displaystyle{ P }[/math]
18. WHILE [math]\displaystyle{ L }[/math] divides more than one integer [math]\displaystyle{ N }[/math] in [math]\displaystyle{ (q+1-2\sqrt{q},q+1+2\sqrt{q}) }[/math]
19.    DO choose a new point [math]\displaystyle{ P }[/math] and go to 1.
20. ENDWHILE
21. RETURN [math]\displaystyle{ N }[/math]     \\it is the cardinality of [math]\displaystyle{ E(\mathbb{F}_q) }[/math]

Notes to the algorithm

  • In line 8 we assume the existence of a match. Indeed, the following lemma ensures that such a match exists:
Let [math]\displaystyle{ a }[/math] be an integer with [math]\displaystyle{ |a| \leq 2m^2 }[/math]. There exist integers [math]\displaystyle{ a_0 }[/math] and [math]\displaystyle{ a_1 }[/math] with
[math]\displaystyle{ -m \lt a_0 \leq m \mbox{ and } -m \leq a_1 \leq m \mbox{ s.t. } a = a_0 + 2ma_1. }[/math]
  • Computing [math]\displaystyle{ (j+1)P }[/math] once [math]\displaystyle{ jP }[/math] has been computed can be done by adding [math]\displaystyle{ P }[/math] to [math]\displaystyle{ jP }[/math] instead of computing the complete scalar multiplication anew. The complete computation thus requires [math]\displaystyle{ m }[/math] additions. [math]\displaystyle{ 2mP }[/math] can be obtained with one doubling from [math]\displaystyle{ mP }[/math]. The computation of [math]\displaystyle{ Q }[/math] requires [math]\displaystyle{ \log (q+1) }[/math] doublings and [math]\displaystyle{ w }[/math] additions, where [math]\displaystyle{ w }[/math] is the number of nonzero digits in the binary representation of [math]\displaystyle{ q+1 }[/math]; note that knowledge of the [math]\displaystyle{ jP }[/math] and [math]\displaystyle{ 2mP }[/math] allows us to reduce the number of doublings. Finally, to get from [math]\displaystyle{ Q+k(2mP) }[/math] to [math]\displaystyle{ Q+(k+1)(2mP) }[/math], simply add [math]\displaystyle{ 2mP }[/math] rather than recomputing everything.
  • We are assuming that we can factor [math]\displaystyle{ M }[/math]. If not, we can at least find all the small prime factors [math]\displaystyle{ p_i }[/math] and check that [math]\displaystyle{ \frac{M}{p_i}P \neq O }[/math] for these. Then [math]\displaystyle{ M }[/math] will be a good candidate for the order of [math]\displaystyle{ P }[/math].
  • The conclusion of step 17 can be proved using elementary group theory: since [math]\displaystyle{ MP=O }[/math], the order of [math]\displaystyle{ P }[/math] divides [math]\displaystyle{ M }[/math]. If no proper divisor [math]\displaystyle{ \bar{M} }[/math] of [math]\displaystyle{ M }[/math] realizes [math]\displaystyle{ \bar{M}P=O }[/math], then [math]\displaystyle{ M }[/math] is the order of [math]\displaystyle{ P }[/math].
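
The following is a simplified sketch of steps 1–16 above: point addition and scalar multiplication in short Weierstrass form over a prime field, the baby-step giant-step search for an [math]\displaystyle{ M }[/math] with [math]\displaystyle{ MP=O }[/math], and its reduction to the exact order of [math]\displaystyle{ P }[/math]. The helper names (ec_add, ec_mul, order_of_point) are illustrative choices, and the lcm/uniqueness loop of steps 17–21 is omitted.

```python
from math import isqrt

O = None  # the point at infinity

def ec_add(P, Q, A, p):
    """Add two points on y^2 = x^3 + A*x + B over F_p (B is not needed here)."""
    if P is O:
        return Q
    if Q is O:
        return P
    x1, y1 = P
    x2, y2 = Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return O
    if P == Q:
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def ec_mul(k, P, A, p):
    """Scalar multiplication k*P by double-and-add."""
    R = O
    while k > 0:
        if k & 1:
            R = ec_add(R, P, A, p)
        P = ec_add(P, P, A, p)
        k >>= 1
    return R

def order_of_point(P, A, p):
    m = isqrt(isqrt(p)) + 1                    # step 1: m > p^(1/4)
    baby, R = {}, O
    for j in range(m + 1):                     # steps 2-4: store the points jP,
        baby[R] = j                            # built by repeated addition of P
        R = ec_add(R, P, A, p)
    giant = ec_mul(2 * m, P, A, p)             # 2mP
    R, k = ec_mul(p + 1, P, A, p), 0           # step 6: Q = (q+1)P
    while True:                                # steps 7-9: find Q + k(2mP) = +/- P_j
        if R in baby:
            M = p + 1 + 2 * m * k - baby[R]
            break
        if R is not O and (R[0], -R[1] % p) in baby:
            M = p + 1 + 2 * m * k + baby[(R[0], -R[1] % p)]
            break
        R, k = ec_add(R, giant, A, p), k + 1
    # steps 10-16: reduce M to the exact order of P using its prime factors
    n, d, primes = M, 2, set()
    while d * d <= n:
        while n % d == 0:
            primes.add(d)
            n //= d
        d += 1
    if n > 1:
        primes.add(n)
    for q in primes:
        while M % q == 0 and ec_mul(M // q, P, A, p) is O:
            M //= q
    return M

# Example: the point (0, 1) on y^2 = x^3 + x + 1 over F_5 has order 9,
# which here equals |E(F_5)| itself.
print(order_of_point((0, 1), 1, 5))  # -> 9
```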

One drawback of this method is that it requires too much memory when the group becomes large. To address this, it might be more efficient to store only the [math]\displaystyle{ x }[/math]-coordinates of the points [math]\displaystyle{ jP }[/math] (along with the corresponding integer [math]\displaystyle{ j }[/math]). However, this leads to an extra scalar multiplication in order to choose between [math]\displaystyle{ -j }[/math] and [math]\displaystyle{ +j }[/math].

There are other generic algorithms for computing the order of a group element that are more space efficient, such as Pollard's rho algorithm and the Pollard kangaroo method. The Pollard kangaroo method allows one to search for a solution in a prescribed interval, yielding a running time of [math]\displaystyle{ O(\sqrt[4]{q}) }[/math], using [math]\displaystyle{ O(\log^2{q}) }[/math] space.

Schoof's algorithm

Main page: Schoof's algorithm

A theoretical breakthrough for the problem of computing the cardinality of groups of the type [math]\displaystyle{ E(\mathbb{F}_q) }[/math] was achieved by René Schoof, who, in 1985, published the first deterministic polynomial time algorithm. Central to Schoof's algorithm is the use of division polynomials and Hasse's theorem, together with the Chinese remainder theorem.

Schoof's insight exploits the fact that, by Hasse's theorem, there is a finite range of possible values for [math]\displaystyle{ |E(\mathbb{F}_q)| }[/math]. It suffices to compute [math]\displaystyle{ |E(\mathbb{F}_q)| }[/math] modulo an integer [math]\displaystyle{ N \gt 4\sqrt{q} }[/math]. This is achieved by computing [math]\displaystyle{ |E(\mathbb{F}_q)| }[/math] modulo primes [math]\displaystyle{ \ell_1, \ldots, \ell_s }[/math] whose product exceeds [math]\displaystyle{ 4 \sqrt{q} }[/math], and then applying the Chinese remainder theorem. The key to the algorithm is using the division polynomial [math]\displaystyle{ \psi_{\ell} }[/math] to efficiently compute [math]\displaystyle{ |E(\mathbb{F}_q)| }[/math] modulo [math]\displaystyle{ \ell }[/math].
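
The following is a minimal sketch of this recombination step, assuming the residues [math]\displaystyle{ |E(\mathbb{F}_q)| \bmod \ell_i }[/math] have already been computed (that computation, via the division polynomials [math]\displaystyle{ \psi_\ell }[/math], is the heart of Schoof's algorithm and is not shown here). The helper name combine_by_crt is an illustrative choice.

```python
from math import isqrt, prod

def combine_by_crt(q, residues):
    """residues: {l: |E(F_q)| mod l} for pairwise coprime l with prod(l) > 4*sqrt(q)."""
    L = prod(residues.keys())
    assert L > 4 * isqrt(q) + 4          # a conservative integer check for L > 4*sqrt(q)
    # Chinese remainder theorem: recover N mod L from the residues.
    r = 0
    for l, n in residues.items():
        Ml = L // l
        r = (r + n * Ml * pow(Ml, -1, l)) % L
    # By Hasse's theorem N lies in an interval of length 4*sqrt(q) < L,
    # so exactly one representative of r mod L falls inside it.
    low = q + 1 - isqrt(4 * q)           # smallest integer allowed by Hasse's bound
    return low + (r - low) % L

# Toy check with y^2 = x^3 + x + 1 over F_5 (N = 9):
# 9 mod 2 = 1, 9 mod 3 = 0, 9 mod 7 = 2, and 2*3*7 = 42 > 4*sqrt(5).
print(combine_by_crt(5, {2: 1, 3: 0, 7: 2}))  # -> 9
```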

The running time of Schoof's Algorithm is polynomial in [math]\displaystyle{ n=\log{q} }[/math], with an asymptotic complexity of [math]\displaystyle{ O(n^2M(n^3)/\log{n})=O(n^{5+o(1)}) }[/math], where [math]\displaystyle{ M(n) }[/math] denotes the complexity of integer multiplication. Its space complexity is [math]\displaystyle{ O(n^3) }[/math].

Schoof–Elkies–Atkin algorithm

Main page: Schoof–Elkies–Atkin algorithm

In the 1990s, Noam Elkies, followed by A. O. L. Atkin devised improvements to Schoof's basic algorithm by making a distinction among the primes [math]\displaystyle{ \ell_1, \ldots, \ell_s }[/math] that are used. A prime [math]\displaystyle{ \ell }[/math] is called an Elkies prime if the characteristic equation of the Frobenius endomorphism, [math]\displaystyle{ \phi^2-t\phi+ q = 0 }[/math], splits over [math]\displaystyle{ \mathbb{F}_\ell }[/math]. Otherwise [math]\displaystyle{ \ell }[/math] is called an Atkin prime. Elkies primes are the key to improving the asymptotic complexity of Schoof's algorithm. Information obtained from the Atkin primes permits a further improvement which is asymptotically negligible but can be quite important in practice. The modification of Schoof's algorithm to use Elkies and Atkin primes is known as the Schoof–Elkies–Atkin (SEA) algorithm.
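
As a small illustration of this dichotomy (not of how it is decided in practice): the characteristic polynomial [math]\displaystyle{ x^2 - tx + q }[/math] splits over [math]\displaystyle{ \mathbb{F}_\ell }[/math] exactly when its discriminant [math]\displaystyle{ t^2 - 4q }[/math] is a square modulo [math]\displaystyle{ \ell }[/math]. Since the trace [math]\displaystyle{ t }[/math] is not known in advance, implementations classify [math]\displaystyle{ \ell }[/math] with the modular-polynomial test described below; the helper name is_elkies_prime in this sketch is an illustrative choice.

```python
def is_elkies_prime(l, t, q):
    """Return True if x^2 - t*x + q splits over F_l (l an odd prime, l != char)."""
    d = (t * t - 4 * q) % l
    if d == 0:
        return True                      # double root: the polynomial still splits
    return pow(d, (l - 1) // 2, l) == 1  # Euler's criterion for d being a square mod l

# For y^2 = x^3 + x + 1 over F_5 we found |E| = 9, hence t = q + 1 - |E| = -3.
print(is_elkies_prime(3, -3, 5))  # t^2 - 4q = -11 ≡ 1 (mod 3), a square -> True (Elkies)
print(is_elkies_prime(7, -3, 5))  # -11 ≡ 3 (mod 7), a non-square -> False (Atkin)
```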

The status of a particular prime [math]\displaystyle{ \ell }[/math] depends on the elliptic curve [math]\displaystyle{ E/\mathbb{F}_q }[/math], and can be determined using the modular polynomial [math]\displaystyle{ \Psi_\ell(X,Y) }[/math]. If the univariate polynomial [math]\displaystyle{ \Psi_\ell(X,j(E)) }[/math] has a root in [math]\displaystyle{ \mathbb{F}_q }[/math], where [math]\displaystyle{ j(E) }[/math] denotes the j-invariant of [math]\displaystyle{ E }[/math], then [math]\displaystyle{ \ell }[/math] is an Elkies prime, and otherwise it is an Atkin prime. In the Elkies case, further computations involving modular polynomials are used to obtain a proper factor of the division polynomial [math]\displaystyle{ \psi_\ell }[/math]. The degree of this factor is [math]\displaystyle{ O(\ell) }[/math], whereas [math]\displaystyle{ \psi_\ell }[/math] has degree [math]\displaystyle{ O(\ell^2) }[/math].

Unlike Schoof's algorithm, the SEA algorithm is typically implemented as a probabilistic algorithm (of the Las Vegas type), so that root-finding and other operations can be performed more efficiently. Its computational complexity is dominated by the cost of computing the modular polynomials [math]\displaystyle{ \Psi_\ell(X,Y) }[/math], but as these do not depend on [math]\displaystyle{ E }[/math], they may be computed once and reused. Under the heuristic assumption that there are sufficiently many small Elkies primes, and excluding the cost of computing modular polynomials, the asymptotic running time of the SEA algorithm is [math]\displaystyle{ O(n^2 M(n^2)/\log{n}) = O(n^{4+o(1)}) }[/math], where [math]\displaystyle{ n=\log{q} }[/math]. Its space complexity is [math]\displaystyle{ O(n^3\log{n}) }[/math], but when precomputed modular polynomials are used this increases to [math]\displaystyle{ O(n^4) }[/math].

Bibliography

  • I. Blake, G. Seroussi, and N. Smart: Elliptic Curves in Cryptography, Cambridge University Press, 1999.
  • A. Enge: Elliptic Curves and their Applications to Cryptography: An Introduction. Kluwer Academic Publishers, Dordrecht, 1999.
  • G. Musiker: Schoof's Algorithm for Counting Points on [math]\displaystyle{ E(\mathbb{F}_q) }[/math]. Available at http://www.math.umn.edu/~musiker/schoof.pdf
  • R. Schoof: Counting Points on Elliptic Curves over Finite Fields. J. Théor. Nombres Bordeaux 7:219-254, 1995. Available at http://www.mat.uniroma2.it/~schoof/ctg.pdf
  • L. C. Washington: Elliptic Curves: Number Theory and Cryptography. Chapman & Hall/CRC, New York, 2003.