How to analytically calculate two-qubit entanglement witnesses?

Joint work with Abhishek Yadav.

In my PhD thesis and the articles preceding it I sketched out a numerical method for finding maximal expectation values of bipartite operators over separable qubit-qudit states. In its basic form, the reasoning boils down to the following: maximizing $\langle X\rangle_{\rho^A\otimes \rho^B}$ is trivial if the density operator of one of the subsystems $A$ or $B$ is fixed. For instance, let the qudit state $\rho^B$ be fixed. The remaining optimization over $\rho^A$ yields

\begin{equation}\begin{aligned} \max_{\rho^A} \langle X\rangle_{\rho^A\otimes \rho^B} =& \lambda_{\max} \overbrace{(\Tr_B[(\mathbb{1}^A\otimes \rho^B)X])}^{X_{\rho^B}}. \end{aligned} \end{equation}

Here $\lambda_{\max}$ denotes the maximal eigenvalue of the partially traced operator $X_{\rho^B}$. Since subsystem $A$ is a qubit, $X_{\rho^B}$ is a $2\times 2$ Hermitian matrix, so its maximal eigenvalue can be rewritten with four expectation values of qudit operators:

\begin{equation} \begin{aligned} \max_{\rho^A} \langle X\rangle_{\rho^A\otimes \rho^B} = \frac12\langle X_0\rangle_{\rho^B} + \frac12 \sqrt{\sum_{i=1}^3 \langle X_i\rangle^2_{\rho^B}}, \end{aligned} \end{equation}

where $X_i = \Tr_A[(\sigma_i^A \otimes \mathbb{1}^B)X]$, $\sigma_1, \sigma_2, \sigma_3$ are the Pauli matrices, and $\sigma_0 = \mathbb{1}$. Therefore, the problem can be reinterpreted in geometrical terms as finding a cone tangent to the 4D joint numerical range $L(X_0, X_1, X_2, X_3)$.
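This identity is easy to sanity-check numerically. Below is a minimal sketch of mine (not code from the original articles; all names are placeholders) that draws a random Hermitian operator on a qubit-qutrit system and a random $\rho^B$, and compares both sides:

```python
import numpy as np

# Pauli matrices, with sigma_0 the identity
sigma = [np.eye(2),
         np.array([[0, 1], [1, 0]]),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]])]

d = 3                                       # qudit dimension (qutrit here)
rng = np.random.default_rng(0)

# random Hermitian X on the qubit-qudit space
A = rng.normal(size=(2*d, 2*d)) + 1j * rng.normal(size=(2*d, 2*d))
X = (A + A.conj().T) / 2
Xr = X.reshape(2, d, 2, d)                  # index order: (a, i, b, j)

# random qudit state rho_B
B = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho_B = B @ B.conj().T
rho_B /= np.trace(rho_B)

# left-hand side: lambda_max of Tr_B[(1 ⊗ rho_B) X]
X_rhoB = np.einsum('aibj,ji->ab', Xr, rho_B)
lhs = np.linalg.eigvalsh(X_rhoB)[-1]

# right-hand side: 1/2 <X_0> + 1/2 sqrt(sum_i <X_i>^2),
# with X_i = Tr_A[(sigma_i ⊗ 1) X] acting on the qudit alone
X_i = [np.einsum('ba,aibj->ij', s, Xr) for s in sigma]
ev = [np.real(np.trace(x @ rho_B)) for x in X_i]
rhs = ev[0] / 2 + np.sqrt(sum(e**2 for e in ev[1:])) / 2

print(np.isclose(lhs, rhs))                 # expected: True
```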

Now the trick is that the isosurface of (2) in the space of $L(X_0, X_1, X_2, X_3)$ can be described by the quadratic polynomial $(x_0-2t)^2-x_1^2-x_2^2-x_3^2$. Furthermore, at least for qubit-qubit systems, an analogue of the Kippenhahn theorem works for the task of describing the boundary of $L(X_0, X_1, X_2, X_3)$ in polynomial terms.
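To make the cone explicit: fixing the value of (2) to $t$ gives $2t - x_0 = \sqrt{x_1^2+x_2^2+x_3^2}$, and squaring yields $(x_0-2t)^2 - x_1^2 - x_2^2 - x_3^2 = 0$ on the nappe with $2t \ge x_0$; raising $t$ simply shifts the apex of the cone along the $x_0$ axis.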

The algorithm for finding an analytical description of the maximal separable expectation value of a two-qubit operator $X$ is now as follows.

  1. Calculate the four matrices $(X_0, X_1, X_2, X_3)$ (a short code sketch of this step follows the list).
  2. Determine the polynomial description of $L(X_0, X_1, X_2, X_3)$ through the same method as outlined in the Kippenhahn theorem. Generically, the Gröbner basis will contain a linear polynomial $l$ (since $L$ is a 3D set embedded in 4D – it is flat!) and a quadratic polynomial $q$.
  3. Constrain $(x_0-2t)^2-x_1^2-x_2^2-x_3^2$ to the surface defined by the linear equation $l=0$.
  4. Find the tangent points of the isosurface defined in point 3 with the variety $q=0$.
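As a concrete illustration of step 1, here is a minimal sympy sketch (my own helper – `witness_components` is not a library function), assuming the first tensor factor is the qubit $A$ and the basis ordering $(\ket{00}, \ket{01}, \ket{10}, \ket{11})$:

```python
import sympy as sp

# Pauli matrices, with sigma_0 the identity
sigma = [sp.eye(2),
         sp.Matrix([[0, 1], [1, 0]]),
         sp.Matrix([[0, -sp.I], [sp.I, 0]]),
         sp.Matrix([[1, 0], [0, -1]])]

def witness_components(X):
    """Return [X_0, X_1, X_2, X_3] with X_i = Tr_A[(sigma_i ⊗ 1) X]
    for a 4x4 operator X on qubit ⊗ qubit."""
    blocks = [[X[2*a:2*a+2, 2*b:2*b+2] for b in range(2)] for a in range(2)]
    return [sum((s[a, ap] * blocks[ap][a] for a in range(2) for ap in range(2)),
                sp.zeros(2, 2))
            for s in sigma]
```

For a qubit-qudit operator the same formula applies with $d \times d$ blocks.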
Example

Consider the operator written in the computational basis of two qubits, $(\ket{00}, \ket{01}, \ket{10}, \ket{11})$:

\begin{equation} \begin{aligned} X = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & \sin (\theta ) & \cos (\theta ) & 0 \\ 0 & \cos (\theta ) & -\sin (\theta ) & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}. \end{aligned} \end{equation}

The relevant matrices are:

\begin{equation} \begin{aligned} X_0 = \left( \begin{array}{cc} -\sin (\theta ) & 0 \\ 0 & \sin (\theta ) \\ \end{array} \right), & ~~X_1 = \left( \begin{array}{cc} 0 & \cos (\theta ) \\ \cos (\theta ) & 0 \\ \end{array} \right),\\ X_2 =\left( \begin{array}{cc} 0 & -i \cos (\theta ) \\ i \cos (\theta ) & 0 \\ \end{array} \right), &~~ X_3=\left( \begin{array}{cc} \sin (\theta ) & 0 \\ 0 & \sin (\theta ) \\ \end{array} \right). \end{aligned} \end{equation}
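Continuing the sketch above, these can be reproduced for symbolic $\theta$:

```python
theta = sp.symbols('theta', real=True)
X = sp.Matrix([[0, 0, 0, 0],
               [0, sp.sin(theta),  sp.cos(theta), 0],
               [0, sp.cos(theta), -sp.sin(theta), 0],
               [0, 0, 0, 0]])
X0, X1, X2, X3 = witness_components(X)   # reproduces the four matrices above
```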

Two polynomial equations define the joint numerical range $L(X_0, X_1, X_2, X_3)$:

\begin{equation} \begin{aligned} {\color{gray}q :=}~\sin ^2(\theta ) \left(-\cos ^2(\theta )+x_1^2+x_2^2\right)+x_0^2 \cos ^2(\theta ) &= 0,\\ {\color{gray}l :=}~x_3-\sin (\theta ) &= 0. \end{aligned} \end{equation}
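These follow from the Bloch parametrization $\rho^B = \frac12(\mathbb{1} + a\sigma_1 + b\sigma_2 + c\sigma_3)$: since $X_0 = -\sin (\theta )\, \sigma_3$, $X_1 = \cos (\theta )\, \sigma_1$, $X_2 = \cos (\theta )\, \sigma_2$ and $X_3 = \sin (\theta )\, \mathbb{1}$, we get $x_0 = -c\sin (\theta )$, $x_1 = a\cos (\theta )$, $x_2 = b\cos (\theta )$ and $x_3 = \sin (\theta )$. The last relation is exactly $l = 0$, while the purity condition $a^2+b^2+c^2 = 1$ on the boundary becomes $x_0^2/\sin^2 (\theta ) + (x_1^2+x_2^2)/\cos^2 (\theta ) = 1$, i.e. $q = 0$ after clearing denominators.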

Now we wish to find the tangent point of the variety defined by the above polynomial equations with the cone $(x_0-2t)^2-x_1^2-x_2^2-x_3^2=0$. First, we remove the $x_3$ variable using the constraint that $l=0$, leading to the constrained equation

\begin{equation} \begin{aligned} {\color{gray}c:=}~-4 t^2+4 t x_0+\sin ^2(\theta )+x_1^2-x_0^2+x_2^2 = 0. \end{aligned} \end{equation}

Now, in the space of variables $(x_0, x_1, x_2)$ the conditions for tangency are simple: at the point we are looking for, we demand that $q=c=0$ and the normal vectors are collinear, leading to the following final set of polynomial equations

\begin{equation} \begin{aligned} {\color{gray}q :=}~\sin ^2(\theta ) \left(-\cos ^2(\theta )+x_1^2+x_2^2\right)+x_0^2 \cos ^2(\theta ) &= 0,\\ {\color{gray}c:=}~-4 t^2+4 t x_0+\sin ^2(\theta )+x_1^2-x_0^2+x_2^2 &= 0,\\ {\color{gray}\nabla c \times \nabla q:=}\left(0,\,4 x_2 \left(x_0-2 t \sin ^2(\theta )\right),\,-4 x_1 \left(x_0-2 t \sin ^2(\theta )\right)\right) &=\vec 0. \end{aligned} \end{equation}

Using standard Gröbner basis algorithms, the variables $x_0, x_1, x_2$ can be eliminated, so that only a polynomial in $t$ remains. This is the result we are looking for: its roots are the candidates for the maximal separable expectation value:

\begin{equation} \begin{aligned} \frac{1}{2} t \sin ^2(\theta ) \cos ^2(\theta ) \left(\left(4 t^4-1\right) \cos (2 \theta )+4 t^4+t^2 \cos (4 \theta )-3 t^2+1\right) = 0. \end{aligned} \end{equation}
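This elimination can be reproduced with a few lines of sympy. The sketch below is my own (variable names are arbitrary) and uses the concrete values $\sin\theta = 3/5$, $\cos\theta = 4/5$ so that all coefficients stay rational:

```python
import sympy as sp

x0, x1, x2, t = sp.symbols('x0 x1 x2 t')
sn, cs = sp.Rational(3, 5), sp.Rational(4, 5)              # sin(theta), cos(theta)

q    = sn**2*(-cs**2 + x1**2 + x2**2) + x0**2*cs**2
cone = -4*t**2 + 4*t*x0 + sn**2 + x1**2 - x0**2 + x2**2    # the polynomial c from above
cross = sp.Matrix([4*t - 2*x0, 2*x1, 2*x2]).cross(         # grad c ...
        sp.Matrix([2*x0*cs**2, 2*x1*sn**2, 2*x2*sn**2]))   # ... x grad q

eqs = [q, cone] + [g for g in cross if g != 0]   # drop the identically vanishing component
G = sp.groebner(eqs, x0, x1, x2, t, order='lex') # lex order with t last eliminates x0, x1, x2
eliminant = [g for g in G.exprs if g.free_symbols <= {t}][0]
print(sp.factor(eliminant))
print(sp.solve(eliminant, t))  # roots should include 0, ±3/5 (= ±sin θ) and ±5/8 (= ±½ sec θ)
```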

The roots are $t_*\in\{0,\pm \sin \theta, \pm \tfrac12 \sec\theta\}$. As evidenced by the numerical maximization results (dashed lines in the figure below), the roots indeed correspond to maximal separable expectation values – although it is not yet clear how to choose the right one a priori.
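For completeness, here is a crude numerical cross-check of mine (random sampling of pure product states rather than the proper maximization behind the figure): the largest sampled value of $\langle X\rangle$ should land near one of the positive roots.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 1.0
X = np.zeros((4, 4))
X[1, 1], X[2, 2] = np.sin(theta), -np.sin(theta)
X[1, 2] = X[2, 1] = np.cos(theta)

def random_state(dim=2):
    """Haar-random pure state of the given dimension."""
    v = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    return v / np.linalg.norm(v)

def product_expval():
    """<X> in a random pure product state |a>|b>."""
    v = np.kron(random_state(), random_state())
    return np.real(v.conj() @ X @ v)

best = max(product_expval() for _ in range(100_000))
print(best)                                       # largest sampled separable <X>
print([0.0, np.sin(theta), 0.5 / np.cos(theta)])  # candidate roots t_*
```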