Sep 30

## PwnThyBytes CTF 2019 – Wrong Ring (Crypto)

Is post quantum cryptography too complex for you?

wrong_ring.sage

Summary: Ring-LWE with small error, hidden under a number field

Let us look at the main part:

```python
prime = 1487
degree = 256
q = x^256 + prime - 1
N = PolynomialRing(RationalField(100)).quo(q)
roots = q.roots(CC)  # in particular order
for i in range(nr_samples):
    a = random_vector_mod(prime, degree)
    a_coeff = vec2poly(a)
    a_canonical = coeff_to_canonical(a_coeff, roots)
    out.write('a_' + str(i) + ' is ' + str(a_canonical) + '\n')

    err_coeff = generate_error(sigma, prime, degree)
    b_coeff_interm = N(a_coeff * sec_coeff + err_coeff)
    b_coeff = 0
    for j in range(degree):
        b_coeff = b_coeff + \
            x^j * reduction_mod(b_coeff_interm.list()[j], prime)

    b_canonical = coeff_to_canonical(b_coeff, roots)
    out.write('b_' + str(i) + ' is ' + str(b_canonical) + '\n')
```

Let $p = 1487$. Note that $a(x)$ and $s(x)$ are polynomials over $GF(p)$. However, the error polynomial $e(x)$ (given by err_coeff) has real coefficients. Moreover, the function coeff_to_canonical evaluates its polynomial argument at the complex roots of the ring-defining polynomial $q(x)$. We are given 8 samples $a_i(x), b_i(x)$ in this form.
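To make this representation concrete, here is a toy-sized sketch (plain Python/NumPy rather than the challenge's Sage code) of how such a canonical form can be computed; the root ordering and the reimplemented coeff_to_canonical are my assumptions mirroring the description, not the challenge's exact helper:

```python
import numpy as np

# Toy parameters; the challenge uses n = 256.
n, p = 8, 1487

# Roots of q(x) = x^n + p - 1: since x^n = 1 - p = -(p - 1), they have
# magnitude (p - 1)^(1/n) and angles (2k + 1)*pi/n. We order them so the
# second half is the conjugate of the first half (an assumption that
# mirrors the ordering used by the challenge).
mag = (p - 1) ** (1.0 / n)
half = [mag * np.exp(1j * (2 * k + 1) * np.pi / n) for k in range(n // 2)]
roots = np.array(half + [r.conjugate() for r in half])

def coeff_to_canonical(coeffs, roots):
    """Evaluate sum_j c_j * x^j at every root (the 'canonical' form)."""
    return np.array([sum(c * r ** j for j, c in enumerate(coeffs))
                     for r in roots])

# sanity check: every root satisfies root^n + (p - 1) = 0 (up to float error)
print(np.max(np.abs(roots ** n + (p - 1))))
```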

The high-level setting is basically a Ring-LWE scheme. The ring is $GF(p)[x]/(x^{256}-1)$ (note that $q(x) = x^{256}+p-1 \equiv x^{256}-1 \pmod p$): polynomials are reduced by the rule $x^{256}=1$ and coefficients are reduced modulo $p$. This is very intuitive: multiplying a polynomial by $x$ simply rotates its coefficient vector cyclically.
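As a sanity check on the rotation intuition, here is a toy-sized multiplication routine for this ring (pure Python, illustrative names, degree reduced from 256 to 8): multiplication is convolution with the exponents wrapped mod $n$, since $x^n = 1$.

```python
p, n = 1487, 8   # toy degree; the challenge uses n = 256

def ring_mul(a, b):
    """Multiply two coefficient vectors in GF(p)[x]/(x^n - 1):
    a convolution with indices wrapped mod n, because x^n = 1."""
    c = [0] * n
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[(i + j) % n] = (c[(i + j) % n] + ai * bj) % p
    return c

a = [1, 2, 3, 4, 0, 0, 0, 0]
x_poly = [0, 1, 0, 0, 0, 0, 0, 0]   # the polynomial x
print(ring_mul(a, x_poly))  # → [0, 1, 2, 3, 4, 0, 0, 0]: a cyclic rotation
```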

$s(x)$ is a secret polynomial in the ring: after recovering it we can decrypt the jpeg image and read the flag. We get 8 samples of the form $a_i, b_i$ such that $b_i(x) = a_i(x)s(x)+e_i(x)$, where $a_i(x)$ is a random polynomial over $GF(p)$ and $e_i(x)$ is a polynomial with 'small' but fractional coefficients.

Recall that we don't get the coefficients of $a_i, b_i$ directly. Instead, we get evaluations of these polynomials at the complex roots of the polynomial $q(x)=x^{256}+p-1$, represented as high-precision floating-point numbers. However, we can recover the polynomials rather precisely by interpolation. Note that the matrix Sigma defined in the code corresponds exactly to evaluating a polynomial at the aforementioned roots, so its inverse Sigma_inv corresponds exactly to interpolation with respect to those roots. Therefore, we can recover the polynomials $a_i(x), b_i(x)$ by multiplying the given vectors by Sigma_inv. The precision is enough to recover $a_i$ exactly, since it has integer coefficients (we simply round each coefficient to the nearest integer). However, due to the added error, $b_i(x)$ is not integral, and we have to look at the magnitude of the error polynomial $e(x)$, which is generated in a very special way.
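The evaluation/interpolation pair can be sketched at toy size in NumPy (assuming, as described above, that Sigma is the Vandermonde matrix of the roots and Sigma_inv its inverse; the root ordering is my assumption):

```python
import numpy as np

n, p = 8, 1487   # toy degree; the challenge uses n = 256

# complex roots of x^n + p - 1, conjugate pairs in the second half
mag = (p - 1) ** (1.0 / n)
half = [mag * np.exp(1j * (2 * k + 1) * np.pi / n) for k in range(n // 2)]
roots = np.array(half + [r.conjugate() for r in half])

Sigma = np.array([[r ** j for j in range(n)] for r in roots])  # evaluation
Sigma_inv = np.linalg.inv(Sigma)                               # interpolation

coeffs = np.array([3, 1, 4, 1, 5, 9, 2, 6], dtype=float)
values = Sigma @ coeffs          # the "canonical" form we are given
recovered = Sigma_inv @ values   # interpolate back to coefficients
print(np.round(recovered.real).astype(int))  # → [3 1 4 1 5 9 2 6]
```

For integer coefficients the rounding step recovers them exactly, which is why $a_i$ can be reconstructed without loss.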

```python
def generate_error(sigma, prime, degree):
    D = RealDistribution('gaussian', sigma)
    error = []
    for i in range(degree):
        error.append(D.get_random_element())
    error = vector(error).column()
    # U = 1//sqrt(2) * matrix([[1, i], [1, -i]])
    error_canonical = U * error
    error_coeff = []
    error_coeff_interm = Sigma_Inv * error_canonical
    for i in range(degree):
        error_coeff += [error_coeff_interm[i][0].real()]
    error_coeff = vec2poly(error_coeff)
    return error_coeff
```

First, a vector of 256 values is sampled from a Gaussian distribution with sigma = 1000 (the average absolute value is about 800, which is rather high). Let $r_0$ and $r_1$ be its two halves. Then multiplication by the matrix $U$ is performed, which results in $r' = (r_0 + ir_1 \| r_0 - ir_1)$, i.e. the second half is the complex conjugate of the first half. This vector is then treated as the vector of values of a polynomial $e(x)$ at the complex roots of the polynomial $x^{256}+p-1$, which are ordered exactly so that the second half of the roots is the complex conjugate of the first half (note that all roots come in conjugate pairs). Finally, the vector $r'$ is interpolated to obtain the polynomial $e(x)$; this is done by multiplying $r'$ by the Sigma_inv matrix. The polynomial $e(x)$ is then used in the Ring-LWE scheme described above.
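A toy NumPy experiment (my own sketch, not the challenge code) confirms the key structural point: interpolating a conjugate-symmetric value vector on conjugate-paired roots yields real coefficients, and the high-degree coefficients come out much smaller than the low-degree ones.

```python
import numpy as np

n, sigma = 8, 1000.0   # toy degree; the challenge uses n = 256
mag = 1486 ** (1.0 / n)
half = [mag * np.exp(1j * (2 * k + 1) * np.pi / n) for k in range(n // 2)]
roots = np.array(half + [r.conjugate() for r in half])
Sigma_inv = np.linalg.inv(np.array([[r ** j for j in range(n)] for r in roots]))

rng = np.random.default_rng(0)
r0 = rng.normal(0, sigma, n // 2)
r1 = rng.normal(0, sigma, n // 2)
values = np.concatenate([r0 + 1j * r1, r0 - 1j * r1])  # r' = (r0 + i*r1 || r0 - i*r1)
coeffs = Sigma_inv @ values   # interpolate to get e(x)'s coefficients

print(np.max(np.abs(coeffs.imag)) < 1e-6)  # → True: e(x) is (numerically) real
print(np.abs(coeffs))  # magnitudes shrink towards the high-degree end
```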

I haven't yet figured out a precise explanation, but it turns out that this way of generating $e(x)$ produces polynomials with much smaller coefficients than expected (see UPD1). In particular, the least significant coefficients have an average absolute value of about 30, and the most significant coefficients are actually much smaller than 1! As a result, we can recover several top coefficients of $a_i(x)s(x) = b_i(x)-e_i(x)$ exactly by rounding $b_i(x)$, just as we did for $a_i(x)$. Each such coefficient gives us a linear relation on $s(x)$ with coefficients determined by $a_i(x)$. Since we have 8 samples and $s(x)$ has 256 coefficients, we need at least the 32 most significant coefficients of each $b_i(x)$ ($8 \times 32 = 256$ equations). Experimentally, we can verify that these coefficients are safely obtained by rounding. By taking 33 equations per sample, we make the system overdetermined, so a wrong solution is unlikely to pass.
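To illustrate the final step at toy size, here is a pure-Python sketch (the parameters, the 'secret', and the helper names are made up for the demo) of turning exactly-recovered coefficients of $a(x)s(x)$ into a linear system mod $p$ and solving it by Gaussian elimination:

```python
p, n = 1487, 4   # toy degree; the challenge uses n = 256

def ring_mul(a, s):
    """Coefficient k of a(x)*s(x) in GF(p)[x]/(x^n - 1)."""
    return [sum(a[(k - j) % n] * s[j] for j in range(n)) % p
            for k in range(n)]

a = [5, 7, 11, 13]
secret = [2, 3, 4, 1]        # the unknown we want to recover
c = ring_mul(a, secret)      # pretend these coefficients were recovered exactly

# Each coefficient c_k = sum_j a[(k - j) % n] * s[j] is one linear equation;
# build the augmented circulant system [rows | c].
M = [[a[(k - j) % n] for j in range(n)] + [c[k]] for k in range(n)]

# Gauss-Jordan elimination mod p (p is prime, so pow(x, p-2, p) inverts x)
for col in range(n):
    piv = next(r for r in range(col, n) if M[r][col] % p)
    M[col], M[piv] = M[piv], M[col]
    inv = pow(M[col][col], p - 2, p)
    M[col] = [x * inv % p for x in M[col]]
    for r in range(n):
        if r != col and M[r][col]:
            M[r] = [(x - M[r][col] * y) % p for x, y in zip(M[r], M[col])]

print([M[r][n] for r in range(n)])  # → [2, 3, 4, 1]: the secret is recovered
```

In the real attack each sample only contributes the top 33 coefficients, so rows from all 8 samples are stacked before solving; Sage's solve_right does the elimination for us.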

Here is the fairly simple solution code (put the values from challenge.txt into the arrays $a$ and $b$):

```python
T = 33
t = []
eqs = []
for i in xrange(8):
    # interpolate a_i and b_i and round
    avec = [round(v.real()) for v in Sigma_Inv * a[i]]
    bvec = [round(v.real()) for v in Sigma_Inv * b[i]]

    # generate matrix of multiplication by a_i(x);
    # recall that mult. by x is simply rotation in our ring,
    # thus we simply fill diagonals
    m = matrix(GF(prime), 256, 256)
    for ia, va in enumerate(avec):
        for j in xrange(256):
            m[(ia + j) % 256, j] = va

    # take the most significant T coefficients
    # as linear equations
    eqs.extend(list(m[-T:]))
    t.extend(list(bvec[-T:]))

m = matrix(GF(1487), eqs)
t = vector(GF(1487), t)

sol = m.solve_right(t)
print("sol", sol)
```

UPD1: The challenge author pointed me to the relevant paper with explanations of the norms and the attack: Provably Weak Instances of Ring-LWE Revisited (Wouter Castryck et al.).