The separability problem in quantum theory [1] asks for a criterion to distinguish the separable states from the entangled states. It is known that the separable states in ℳk ⊗ ℳm form a subset of the states that are positive under partial transpose (PPT states), and in low dimensions, km ≤ 6, these two sets coincide [2,3], solving the problem. However, in larger dimensions, km > 6, there are entangled PPT states. In addition, for arbitrary k, m, this problem is known to be hard [4], thus any reduction of the problem to a subset of the PPT states of ℳk ⊗ ℳm is certainly important.
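For readers who want to experiment, a minimal numerical sketch of the PPT test (our illustration, not part of the paper) is given below; it assumes numpy's row-major tensor ordering and uses the well-known two-qubit isotropic family p|Φ+⟩⟨Φ+| + (1 − p)Id/4, which is PPT (and hence, since km = 4 ≤ 6, separable) exactly when p ≤ 1/3.

```python
import numpy as np
k = 2

def ptranspose(rho):
    """Partial transpose on the second tensor factor of a (k*k) x (k*k) matrix."""
    return rho.reshape(k, k, k, k).transpose(0, 3, 2, 1).reshape(k * k, k * k)

u = sum(np.kron(e, e) for e in np.eye(k)) / np.sqrt(k)   # |Phi+>
phi = np.outer(u, u)
for p in (0.2, 1 / 3, 0.5):
    rho = p * phi + (1 - p) * np.eye(k * k) / (k * k)
    print(p, np.min(np.linalg.eigvalsh(ptranspose(rho))))
# The smallest eigenvalue of the partial transpose is (1 - 3p)/4, so the state
# is PPT exactly for p <= 1/3 and entangled for p > 1/3.
```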
In [5], a procedure to reduce the separability problem to a proper subset of PPT states was presented. The idea behind this reduction can be summarized as follows. Let
Now, if A is PPT and X is a positive semidefinite Hermitian eigenvector of FA ∘ GA : ℳk → ℳk, then A decomposes as a sum of states with orthogonal local supports [5, Lemma 8], i.e.,
Then the algorithm proceeds to decompose (V ⊗ W)A(V ⊗ W ) and (V⊥ ⊗ W⊥)A(V⊥ ⊗ W⊥), since they are also PPT, whenever such X is found. Eventually this process stops with
Positive under partial transpose states are not the only type of states on which this procedure works: its key feature, namely that A ∈ ℳk ⊗ ℳm breaks whenever a certain positive semidefinite Hermitian eigenvector is found, also holds for two other types of quantum states. From now on we say that A ∈ ℳk ⊗ ℳm has the complete reducibility property if, for every positive semidefinite Hermitian eigenvector X of FA ∘ GA : ℳk → ℳk, we have
It is not hard to compute a single positive semidefinite Hermitian eigenvector of FA ∘ GA : ℳk → ℳk when A is positive semidefinite. In the next section, we shall see that under this hypothesis GA : ℳk → ℳm and FA : ℳm → ℳk are adjoint positive maps. So if λ is the largest positive eigenvalue of FA ∘ GA, then
Now, the number of times that A breaks into weakly irreducible pieces is maximized when the non-null eigenvalues of FA ∘ GA : ℳk → ℳk are equal, because in this case we are able to produce many positive semidefinite Hermitian eigenvectors. In this situation, we have the following theorem:
If the non-null eigenvalues of FA ∘ GA are equal, then A is separable, when A is PPT or SPC or invariant under realignment [5, Proposition 15]. Notice that the eigenvalues of FA ∘ GA are the squares of the Schmidt coefficients of A.
In [5, Lemma 42], it was also noticed that, in ℳ2 ⊗ ℳ2, every SPC state and every invariant under realignment state is PPT. Earlier, in [6], it was proved that a state supported on the symmetric subspace of ℂk ⊗ ℂk is PPT if and only if it is SPC. This is ample evidence of how closely linked these three types of quantum states are.
In this work we prove new results for this triad of quantum states and a new consequence for their complete reducibility property. One of these results concerns the separability of these states. The aforementioned connections along with our new results lead us to notice a certain triality pattern:
For each proven theorem for one of these three types of states, there are corresponding counterparts for the other two.
We believe that a solution to the separability problem for SPC states or invariant under realignment states would shed light on bound entanglement. This is our reason for studying these types of states.
Next, we would like to point out the origin of these connections which is also the source of the main tools used in this work. For this, we must consider the group of linear contractions [7] and some of its properties. For each permutation σ ∈ S4, the linear transformation Lσ : ℳk ⊗ ℳk → ℳk ⊗ ℳk satisfying
The term contraction comes from the fact that ‖Lσ(γ)‖1 ≤ ‖γ‖1, for every σ ∈ S4, whenever γ ∈ ℳk ⊗ ℳk is a separable state and ‖ ⋅ ‖1 is the trace norm of a matrix (i.e., the sum of its singular values). Hence, if γ ∈ ℳk ⊗ ℳk is a state and ‖Lσ(γ)‖1 > ‖γ‖1 for some σ ∈ S4, then γ ∈ ℳk ⊗ ℳk is entangled. This observation provides two useful criteria for entanglement detection. In cycle notation, they are:
Despite the name – contraction – these linear maps are in fact isometries, as they map an orthonormal basis {
In addition, the set of linear contractions is a group under composition generated by the partial transposes (L(34)(γ) = γΓ and L(12)) and the realignment map (L(23)(γ) = ℛ(γ)). The relations among the elements of this group are extremely useful as we shall see in the proofs of our novel results.
Another very useful fact about the realignment map is that it is a homomorphism with respect to the usual matrix product and a new product which generalizes the Hadamard product (see item (8) of Lemma 1).
Finally, these maps allow connecting this triad of quantum states from their origins, that is, their definitions:
PPT states are the states that remain positive under partial transpose (γ ≥ 0, γΓ ≥ 0) [2,3],
SPC states are the states that remain positive under partial transpose composed with realignment (γ ≥ 0, ℛ(γΓ) ≥ 0) ([5, Corollary 25] and [5, Definition 17]),
invariant under realignment states are the states that remain the same under realignment (γ ≥ 0, ℛ(γ) = γ).
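As a concrete illustration of these three conditions (ours, not from the paper), the sketch below implements the realignment and the partial transpose with the index convention under which ℛ(abt ⊗ cdt) = act ⊗ bdt, and tests the state (1/k)Σi eiei* ⊗ eiei*, which turns out to be PPT, SPC and invariant under realignment, against a Bell state, which is none of the three.

```python
import numpy as np
k = 2

def realign(rho):
    return rho.reshape(k, k, k, k).transpose(0, 2, 1, 3).reshape(k * k, k * k)

def ptranspose(rho):
    return rho.reshape(k, k, k, k).transpose(0, 3, 2, 1).reshape(k * k, k * k)

def is_psd(M, tol=1e-10):
    return bool(np.min(np.linalg.eigvalsh((M + M.conj().T) / 2)) > -tol)

# "Classical" maximally correlated state (1/k) * sum_i |ii><ii| and a Bell state.
rho_cc = sum(np.kron(np.outer(e, e), np.outer(e, e)) for e in np.eye(k)) / k
u = sum(np.kron(e, e) for e in np.eye(k)) / np.sqrt(k)
rho_bell = np.outer(u, u)

for name, rho in [("classical", rho_cc), ("Bell", rho_bell)]:
    print(name,
          "PPT:", is_psd(ptranspose(rho)),                     # gamma^Gamma >= 0
          "SPC:", is_psd(realign(ptranspose(rho))),            # R(gamma^Gamma) >= 0
          "invariant:", bool(np.allclose(realign(rho), rho)))  # R(gamma) = gamma
# classical: True True True    Bell: False False False
```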
These observations on the group of linear contractions are used throughout this paper; they are the keys to obtain our novel results. Now, let us describe these results.
Our first result is an upper bound on the spectral radius for this triad of states. We show that if
Our second result regards the filter normal form of SPC states, invariant under realignment states and certain types of PPT states. This normal form has been used in entanglement theory, for example, to provide a different solution for the separability problem in ℳ2 ⊗ ℳ2 [10,11,12] or to prove the equivalence of some criteria for entanglement detection [13]. Here we show that states which are SPC or invariant under realignment can be put in the filter normal form and their normal forms can still be chosen to be SPC and invariant under realignment, respectively. In other words, if A, B ∈ ℳk ⊗ ℳk are states such that
- (1)
A is SPC then there is an invertible matrix R ∈ ℳk such that
(R ⊗ R)A(R ⊗ R)* = Σi λiδi ⊗ δi, where λ1 = 1/k, δ1 = Id/√k, λi > 0 and δi is Hermitian for every i, and tr(δiδj) = 0 for i ≠ j.
- (2)
B is invariant under realignment then there is an invertible matrix R ∈ ℳk such that
(R ⊗ R̅)B(R ⊗ R̅)* = Σi λiδi ⊗ δ̅i, where λ1 = 1/k, δ1 = Id/√k, λi > 0 and δi is Hermitian for every i, and tr(δiδj) = 0 for i ≠ j.
Our PPT counterpart to these two results, Theorem 3, says that every PPT state whose rank is equal to its two reduced ranks can be put in the filter normal form, which we believe is a novel contribution to the filter normal form literature.
We are not aware of any general algorithm capable of determining whether a given state can be put in the filter normal form or not. From this point of view, these results are not only novel but are highly relevant to the literature on the filter normal form.
Our final result is a lower bound for the ranks of these three types of states. We show that the rank of PPT states, SPC states and invariant under realignment states cannot be smaller than their reduced ranks (when the reduced ranks are equal) and that whenever this minimum is attained the states are separable.
In [14], it was proved that a PPT state γ satisfying rank(γ) ≤ max{rank(γA), rank(γB)} is separable. In [15], it was shown that the rank of a separable state is greater than or equal to its reduced ranks. So if γ is PPT and rank(γ) ≤ max{rank(γA), rank(γB)}, then it is separable and rank(γ) = max{rank(γA), rank(γB)}.
Thus, our last result is known for PPT states, but it is original for SPC states and invariant under realignment states. The proofs presented here for these results are completely original. We use the facts that every PPT state, SPC state and invariant under realignment state with minimal rank can be put in the filter normal form together with the complete reducibility property to finish the proofs. These results show a nice interplay between the filter normal form and the complete reducibility property.
It is quite surprising that all three of these types of states guarantee separability under such a condition. In the entanglement literature, the opposite is usually the case.
For example, it is known that any given bipartite state γ satisfies
The results described above show how fundamental the complete reducibility property is to entanglement theory; it acts as a unifying approach for many results. Another recent one, on entanglement breaking Markovian dynamics, was described in [18]. In fact, even outside entanglement theory, we can find consequences of this property, for example, a new proof of Weiner’s theorem [19] on mutually unbiased bases found in [5].
The triality pattern described above together with the complete reducibility property form a potential source of information for entanglement theory.
This paper is organized as follows. In Section 2, we present some preliminary results, which are mainly facts about the group of linear contractions. In Section 3, we obtain an upper bound for the spectral radius of our special triad of quantum states. In Section 4, we show that SPC states and invariant under realignment states can be put in the filter normal form and that their normal forms retain their shape. In addition, we show that PPT states with minimal rank can also be put in the filter normal form. In Section 5, we prove that the ranks of our triad of quantum states cannot be smaller than their reduced ranks and that whenever this minimum is attained the states are separable.
In this section we present some preliminary results. We begin by noticing that GA : ℳk → ℳm and FA : ℳm → ℳk defined in the introduction are adjoint maps with respect to the trace inner product, when A is Hermitian. The reason behind this is quite simple: If A is Hermitian, then FA(Y )* = FA*(Y*) = FA(Y*), for every Y ∈ ℳm, hence
Notice that for positive semidefinite Hermitian matrices X ∈ ℳk, Y ∈ ℳm and A ∈ ℳk ⊗ ℳm, we have tr(A(X ⊗ Y*)) ≥ 0. Thus, the equality above also shows that GA and FA are positive maps (Definition 2) when A is positive semidefinite. So in this case FA ∘ GA : ℳk → ℳk is a self-adjoint positive map.
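The following sketch illustrates the previous paragraph numerically. The conventions GA(X) = Tr1(A(X ⊗ Id)) and FA(Y) = Tr2(A(Id ⊗ Y)) are our assumption for the illustration (the paper's precise definitions are given in the introduction); with them, both maps are positive when A is positive semidefinite, so iterating FA ∘ GA on the identity produces a positive semidefinite eigenvector for the largest eigenvalue.

```python
import numpy as np
rng = np.random.default_rng(0)
k = m = 3

# Random positive semidefinite A acting on C^k (x) C^m.
M = rng.standard_normal((k * m, k * m)) + 1j * rng.standard_normal((k * m, k * m))
A = M @ M.conj().T

def G_A(X):
    # Assumed convention: G_A(X) = Tr_1(A (X ⊗ Id_m)), a positive map for A >= 0.
    return np.trace((A @ np.kron(X, np.eye(m))).reshape(k, m, k, m), axis1=0, axis2=2)

def F_A(Y):
    # Assumed convention: F_A(Y) = Tr_2(A (Id_k ⊗ Y)), the adjoint of G_A.
    return np.trace((A @ np.kron(np.eye(k), Y)).reshape(k, m, k, m), axis1=1, axis2=3)

# Power iteration from the identity: every iterate is positive semidefinite
# because F_A and G_A are positive maps, hence so is the limiting eigenvector.
X = np.eye(k)
for _ in range(500):
    X = F_A(G_A(X))
    X = X / np.linalg.norm(X)
lam = np.linalg.norm(F_A(G_A(X)))                          # largest eigenvalue of F_A∘G_A
print(round(float(lam), 6), np.round(np.linalg.eigvalsh(X), 6))  # eigenvalues of X are >= 0
```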
These maps GA : ℳk → ℳm and FA : ℳm → ℳk are connected to the following generalization of the Hadamard product extensively used in [20] and required here a few times.
(Generalization of the Hadamard product). Let
Let us recall some facts regarding this product.
Let
- a)
Notice that
ut(Bi ⊗ Cj)u = tr((Bi ⊗ Cj)uut) = tr(BiCjt), where Bi, Cj ∈ ℳk. Therefore, γ * δ = (Idm×m ⊗ ut ⊗ Ids×s)(γ ⊗ δ)(Idm×m ⊗ u ⊗ Ids×s), which implies that γ * δ is positive semidefinite, whenever γ, δ are positive semidefinite. In addition, tr(γ * δ) = tr(γ ⊗ δ (Id ⊗ uut ⊗ Id)) = tr(γB ⊗ δA uut) = tr(γBδAt).
- b)
By [20, Proposition 8], γ * δ = (Fγ ((⋅)t) ⊗ Id)(δ) = (Id ⊗ Gδ((⋅)t))(γ).
- c)
Let F = (uut)Γ ∈ ℳk ⊗ ℳk and notice that if
γ = Σi Ai ⊗ Bi ∈ ℳk ⊗ ℳk, then FγF = Σi Bi ⊗ Ai. This real symmetric matrix F is a unitary matrix and it is usually called the flip operator.
Now, notice that
Next, we discuss some facts about the group of linear contractions. Actually, we need to focus only on three of these maps. The linear contractions important to us are
Below we discuss several properties of these linear contractions such as relations among these elements and how they behave with respect to the product defined in Definition 1.
Let γ, δ ∈ ℳk ⊗ ℳk and v = Σi ai ⊗ bi, w = Σj cj ⊗ dj ∈ ℂk ⊗ ℂk.
- (1)
ℛ(vwt) = V ⊗ W, where
V = Σi aibit and W = Σj cjdjt
- (2)
ℛ(ℛ(γ)) = γ
- (3)
ℛ((V ⊗ W )γ(M ⊗ N )) = (V ⊗ Mt)ℛ(γ)(Wt ⊗ N)
- (4)
ℛ(γF )F = γΓ
- (5)
ℛ(γΓ) = ℛ(γ)F
- (6)
ℛ(γF ) = ℛ(γ)Γ
- (7)
ℛ(γΓ)Γ = γF
- (8)
ℛ(γ * δ) = ℛ(γ)ℛ(δ) (i.e., ℛ is a homomorphism).
- (9)
ℛ(Fγ̅F) = ℛ(γ)*
Items (1–6) were proved in items (2–7) of [5, Lemma 23]. For the other three items, we just need to prove them when γ = abt ⊗ cdt and δ = eft ⊗ ght, where a, b, c, d, e, f, g, h ∈ ℂk.
Item (7): ℛ((abt ⊗ cdt)Γ)Γ = ℛ(abt ⊗ dct)Γ = (adt ⊗ bct)Γ = adt ⊗ cbt.
Now, (abt ⊗ cdt)F = adt ⊗ cbt. So ℛ(γΓ)Γ = γF.
Item (8): ℛ((abt ⊗ cdt) * (eft ⊗ ght)) = ℛ(abt ⊗ ght)(dtf)(cte) = (agt ⊗ bht)(dtf)(cte).
Now, ℛ(abt ⊗ cdt)ℛ(eft ⊗ ght) = (act ⊗ bdt)(egt ⊗ fht) = (agt ⊗ bht)(cte)(dtf).
So ℛ(γ * δ) = ℛ(γ)ℛ(δ).
Item (9):
Now,
So
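These identities are easy to verify numerically; the sketch below (our check, not part of the paper) uses the index convention under which ℛ(abt ⊗ cdt) = act ⊗ bdt, builds the flip operator as F = (uut)Γ, and implements γ * δ through the sandwich formula recalled in item a) of the remark above.

```python
import numpy as np
rng = np.random.default_rng(1)
k = 3

def realign(g):
    return g.reshape(k, k, k, k).transpose(0, 2, 1, 3).reshape(k * k, k * k)

def ptr(g):   # partial transpose on the second tensor factor
    return g.reshape(k, k, k, k).transpose(0, 3, 2, 1).reshape(k * k, k * k)

u = sum(np.kron(e, e) for e in np.eye(k))                 # u = sum_i e_i (x) e_i
F = ptr(np.outer(u, u))                                   # flip operator F = (uu^t)^Gamma
P = np.kron(np.kron(np.eye(k), u.reshape(1, -1)), np.eye(k))

def star(g, d):   # gamma * delta via the sandwich formula of item a)
    return P @ np.kron(g, d) @ P.T

g = rng.standard_normal((k * k, k * k)) + 1j * rng.standard_normal((k * k, k * k))
d = rng.standard_normal((k * k, k * k)) + 1j * rng.standard_normal((k * k, k * k))

print(np.allclose(realign(realign(g)), g))                           # item (2)
print(np.allclose(realign(g @ F) @ F, ptr(g)))                       # item (4)
print(np.allclose(realign(ptr(g)), realign(g) @ F))                  # item (5)
print(np.allclose(realign(g @ F), ptr(realign(g))))                  # item (6)
print(np.allclose(ptr(realign(ptr(g))), g @ F))                      # item (7)
print(np.allclose(realign(star(g, d)), realign(g) @ realign(d)))     # item (8)
print(np.allclose(realign(F @ g.conj() @ F), realign(g).conj().T))   # item (9)
```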
The next lemma is important for Theorem 3; it says something very interesting about PPT states that remain PPT under realignment: they must be invariant under realignment.
Let γ ∈ ℳk ⊗ ℳk be a positive semidefinite Hermitian matrix. If γ and ℛ(γ) are PPT, then γ = ℛ(γ).
By item (4) of Lemma 1, γΓ = ℛ(γF)F. Now, since F2 = Id, γΓF = ℛ(γF). Next, by item (6) of Lemma 1, ℛ(γF) = ℛ(γ)Γ. So γΓF = ℛ(γ)Γ is a positive semidefinite Hermitian matrix by hypothesis. Since F, γΓ and γΓF are Hermitian matrices, γΓF = FγΓ. So there is an orthonormal basis of ℂk ⊗ ℂk formed by symmetric and anti-symmetric eigenvectors of γΓ. Recall that γΓ and γΓF are positive semidefinite; since γΓF acts as −γΓ on the anti-symmetric eigenvectors, γΓ must vanish on them, hence γΓ = γΓF. Finally, we have noticed that γΓF = ℛ(γ)Γ. So γΓ = ℛ(γ)Γ, which implies γ = ℛ(γ).
The next lemma is used in this work a few times. Although simple, we state it here in order to better organize our arguments.
Let γ ∈ ℳk ⊗ ℳk. The singular values of the linear map Gγ : ℳk → ℳk and the matrix ℛ(γ) ∈ ℳk ⊗ ℳk are equal and their largest singular values can be computed as
In addition, if γ is Hermitian, then there are Hermitian matrices γ1, δ1 ∈ ℳk such that ‖ℛ(γ)‖∞ = tr(γ(γ1 ⊗ δ1)), where ‖γ1‖2 = ‖δ1‖2 = 1.
It was proved in item 1) of [5, Lemma 23] that
Now, by item 5) of Lemma 1, ℛ((γ*)Γ) = ℛ(γ*)F, where F is the flip operator, which is unitary. Hence the singular values of ℛ((γ*)Γ) and ℛ(γ*) are the same. Next, items 2) and 9) of Lemma 1 imply that
In addition, Gγ : ℳk → ℳk is the adjoint of Fγ*: ℳk → ℳk, thus Gγ and Fγ* have the same singular values. Therefore, the singular values of the linear map Gγ : ℳk → ℳk and the matrix ℛ(γ) ∈ ℳk ⊗ ℳk are equal. Thus, ‖Gγ ‖∞ = ‖ℛ(γ)‖∞. In order to show that their largest singular values can be computed as stated above, recall that the largest singular value of Gγ : ℳk → ℳk is equal to
This proves the first part of the lemma. Now for the second part, if γ is Hermitian, then the set of Hermitian matrices is left invariant by Gγ : ℳk → ℳk. Therefore, there is a Hermitian matrix γ1 ∈ ℳk such that ‖γ1‖2 = 1 and ‖Gγ(γ1)‖2 equals the largest singular value of Gγ. Notice that ‖Gγ(γ1)‖2 = tr(Gγ(γ1)δ1) = tr(γ(γ1 ⊗ δ1)), where δ1 = Gγ(γ1)/‖Gγ(γ1)‖2. So ‖ℛ(γ)‖∞ = tr(γ(γ1 ⊗ δ1)), where ‖γ1‖2 = ‖δ1‖2 = 1 and γ1, δ1 are Hermitian matrices.
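A quick numerical check of the first claim of the lemma just proved (our sketch): build the matrix of Gγ on vectorized k × k matrices and compare its singular values with those of ℛ(γ). The convention Gγ(X) = Tr1(γ(X ⊗ Id)) is an assumption made for this sketch; under it the two lists of singular values coincide.

```python
import numpy as np
rng = np.random.default_rng(2)
k = 3

def realign(g):
    return g.reshape(k, k, k, k).transpose(0, 2, 1, 3).reshape(k * k, k * k)

M = rng.standard_normal((k * k, k * k)) + 1j * rng.standard_normal((k * k, k * k))
gamma = (M + M.conj().T) / 2                       # a random Hermitian gamma

def G_gamma(X):
    # Assumed convention: G_gamma(X) = Tr_1(gamma (X ⊗ Id)).
    return np.trace((gamma @ np.kron(X, np.eye(k))).reshape(k, k, k, k),
                    axis1=0, axis2=2)

# Matrix of the map G_gamma acting on row-major vectorized k x k matrices.
E = np.eye(k * k)
Mat = np.column_stack([G_gamma(E[:, i].reshape(k, k)).reshape(-1)
                       for i in range(k * k)])

print(np.round(np.linalg.svd(Mat, compute_uv=False), 6))              # singular values of G_gamma
print(np.round(np.linalg.svd(realign(gamma), compute_uv=False), 6))   # singular values of R(gamma)
```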
Now, we have all the preliminary results required to discuss our new results.
In this section we obtain an upper bound for the spectral radius of PPT states, SPC states and invariant under realignment states (Theorem 1). In order to prove this theorem, two lemmas are required.
Our first lemma says that the largest singular value of the partial transpose of any state cannot exceed the largest singular values of its reduced states or of its own realignment.
Let γ ∈ ℳk ⊗ ℳk be any positive semidefinite Hermitian matrix. Then
Let v ∈ ℂk ⊗ ℂk be a unit vector such that |tr(γΓvv*)| = ‖γΓ‖∞. Let n be its rank. Denote by {g1, …, gk} and {e1, …, en} the canonical bases of ℂk and ℂn, respectively. Next, there are matrices D ∈ ℳk×k, E ∈ ℳk×k, R ∈ ℳk×n and S ∈ ℳk×n such that
v = (D ⊗ Id)w, where tr(DD*) = 1,
v = (Id ⊗ E)w, where tr(E̅Et) = 1,
v = (R ⊗ S)u, where
Now,
- (1)
‖γΓ‖∞ = |tr(γΓvv*)| = |tr((D* ⊗ Id)γ(D ⊗ Id)(ww*)Γ)| ≤ tr((D* ⊗ Id)γ(D ⊗ Id)), since Id ⊗ Id ± (ww*)Γ and γ are positive semidefinite. Hence
‖γΓ‖∞ ≤ tr(γ(DD* ⊗ Id)) = tr(γADD*) ≤ ‖γA‖∞tr(DD*) = ‖γA‖∞.
- (2)
‖γΓ‖∞ = |tr(γΓvv*)| = |tr((Id ⊗ Et)γ(Id ⊗ Ē)(ww*)Γ)| ≤ tr((Id ⊗ Et)γ(Id ⊗ Ē)), since Id ⊗ Id ± (ww*)Γ and γ are positive semidefinite. Hence
‖γΓ‖∞ ≤ tr(γ(Id ⊗ ĒEt)) = tr(γBĒEt) ≤ ‖γB‖∞tr(ĒEt) = ‖γB‖∞.
- (3)
‖γΓ‖∞ = |tr(γΓvv*)| = |tr((R* ⊗ St)γ(R ⊗ S̅)(uu*)Γ)| ≤ tr ((R* ⊗ St)γ(R ⊗ S̅)), since Id ⊗ Id ± (uu*)Γ and γ are positive semidefinite. Hence
‖γΓ‖∞ ≤ tr(γ(RR* ⊗ S̅St)) ≤ ‖ℛ(γ)‖∞, since ‖RR*‖2 = ‖S̅St‖2 = 1, by Lemma 3.
Our next lemma shows that the largest singular value of the realignment of any state cannot exceed the geometric mean of the largest singular values of its reduced states.
Let γ ∈ ℳk ⊗ ℳk be a positive semidefinite Hermitian matrix. Then
By Lemma 3, there are Hermitian matrices γ1 ∈ ℳk and δ1 ∈ ℳk such that
tr(γ(γ1 ⊗ δ1)) = ‖ℛ(γ)‖∞.
Consider the following positive semidefinite Hermitian matrix
Its partial trace,
Thus
Notice that
tr(γ(Idk ⊗ δ1²)) = tr(γBδ1²) ≤ ‖γB‖∞tr(δ1²) = ‖γB‖∞, tr(γ(γ1² ⊗ Idm)) = tr(γAγ1²) ≤ ‖γA‖∞tr(γ1²) = ‖γA‖∞ and tr(γ(γ1 ⊗ δ1))² = ‖ℛ(γ)‖∞².
Hence
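Both bounds are easy to test numerically. The sketch below (ours, not the paper's) draws a random state and checks the inequality of Lemma 4 and the one just proved; the index conventions for the partial transpose and the realignment are assumptions, but the inequalities themselves do not depend on them.

```python
import numpy as np
rng = np.random.default_rng(3)
k = 4

def realign(g):
    return g.reshape(k, k, k, k).transpose(0, 2, 1, 3).reshape(k * k, k * k)

def ptr(g):
    return g.reshape(k, k, k, k).transpose(0, 3, 2, 1).reshape(k * k, k * k)

sup = lambda M: np.linalg.norm(M, 2)        # largest singular value

M = rng.standard_normal((k * k, k * k)) + 1j * rng.standard_normal((k * k, k * k))
gamma = M @ M.conj().T
gamma /= np.trace(gamma).real
T = gamma.reshape(k, k, k, k)
gA, gB = np.trace(T, axis1=1, axis2=3), np.trace(T, axis1=0, axis2=2)

print(sup(ptr(gamma)) <= min(sup(gA), sup(gB), sup(realign(gamma))))   # Lemma 4
print(sup(realign(gamma)) <= np.sqrt(sup(gA) * sup(gB)))               # the bound above
```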
These lemmas imply the first new connection for our special triad of quantum states. This theorem says that the spectral radius of any state that is PPT or SPC or invariant under realignment cannot exceed the largest singular values of its reduced states or of its own realignment.
Let γ ∈ ℳk ⊗ ℳk be a positive semidefinite Hermitian matrix. If γ is PPT or SPC or invariant under realignment, then
First, let γ be a PPT state. Hence γΓ is also a state. Notice that (γΓ)A = γA,
By applying Lemma 4 on γΓ, we obtain
So the proof of the PPT case is complete. Next, if γ is SPC or invariant under realignment, then γA = γB or
It remains to prove that ‖γ‖∞ ≤ ‖ℛ(γ)‖∞, whenever γ is SPC or invariant under realignment. Notice that this inequality is trivial for matrices invariant under realignment. Thus, let γ be an SPC state. As defined in the introduction, ℛ(γΓ) is positive semidefinite. Applying Lemma 4 on ℛ(γΓ), we obtain
Now, by items (7) and (2) of Lemma 1, ℛ(γΓ)Γ = γF and ℛ(ℛ(γΓ)) = γΓ, where F is the flip operator. Therefore
Finally, ‖γΓ‖∞ ≤ ‖ℛ(γ)‖∞ by Lemma 4, and ‖γ‖∞ = ‖γF ‖∞, since F is unitary.
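Theorem 1 can be spot-checked on separable states, which are PPT by construction. The sketch below (our illustration) builds a random mixture of product states and verifies that its largest eigenvalue does not exceed the largest singular values of its reduced states or of its realignment.

```python
import numpy as np
rng = np.random.default_rng(4)
k = 3

def realign(g):
    return g.reshape(k, k, k, k).transpose(0, 2, 1, 3).reshape(k * k, k * k)

def rand_psd(n):
    m = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return m @ m.conj().T

# Random separable (hence PPT) state: a mixture of product states.
gamma = sum(np.kron(rand_psd(k), rand_psd(k)) for _ in range(6))
gamma /= np.trace(gamma).real

T = gamma.reshape(k, k, k, k)
gA, gB = np.trace(T, axis1=1, axis2=3), np.trace(T, axis1=0, axis2=2)
sup = lambda M: np.linalg.norm(M, 2)

print(sup(gamma) <= min(sup(gA), sup(gB), sup(realign(gamma))))   # True
```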
In this section we show that every SPC state and every invariant under realignment state can be put in the filter normal form. In addition, their filter normal forms can still be chosen to be SPC and invariant under realignment, respectively (Theorem 2).
Then we show that every PPT state whose rank is equal to its two reduced ranks can be put in the filter normal form (see Theorem 3).
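To give a concrete feel for the filter normal form, here is a simple alternating-filter sketch (a common Sinkhorn-style heuristic, ours, and not the construction used in the proofs below): each step applies a local filter that makes one reduced state equal to Id/k, and for full-rank states the iteration typically converges to a state with both reduced states proportional to the identity.

```python
import numpy as np
rng = np.random.default_rng(5)
k = 3

def reduced(rho):
    T = rho.reshape(k, k, k, k)
    return np.trace(T, axis1=1, axis2=3), np.trace(T, axis1=0, axis2=2)

def inv_sqrt(M):
    w, V = np.linalg.eigh(M)
    return (V / np.sqrt(w)) @ V.conj().T

# Random full-rank state.
M = rng.standard_normal((k * k, k * k)) + 1j * rng.standard_normal((k * k, k * k))
rho = M @ M.conj().T
rho /= np.trace(rho).real

# Alternating local filters (X ⊗ Id) and (Id ⊗ Y): after each half-step one
# reduced state equals Id/k exactly; iterate until both are close to Id/k.
for _ in range(100):
    rA, _ = reduced(rho)
    L = np.kron(inv_sqrt(k * rA), np.eye(k))
    rho = L @ rho @ L.conj().T
    _, rB = reduced(rho)
    L = np.kron(np.eye(k), inv_sqrt(k * rB))
    rho = L @ rho @ L.conj().T

rA, rB = reduced(rho)
print(np.round(k * rA, 6))   # ~ Id
print(np.round(k * rB, 6))   # ~ Id
```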
As described in the introduction, there are applications of this normal form in entanglement theory. Now, it has been noticed that this normal form is connected to an extension of the Sinkhorn-Knopp theorem for positive maps [4,21]. This theorem concerns the existence of invertible matrices R, S such that the map X ↦ R*T(SXS*)R is doubly stochastic, for a positive map T satisfying suitable conditions. So we start this section with some definitions and lemmas related to this theorem.
Let V ∈ ℳk be an orthogonal projection and consider the sub-algebra of ℳk : V ℳkV = {V XV, X ∈ ℳk}. Let Pk denote the set of positive semidefinite Hermitian matrices of ℳk.
Let us say that T : V ℳkV → V ℳkV is a positive map if T (X) ∈ Pk ∩ V ℳkV for every X ∈ Pk ∩ V ℳkV. In addition, we say that a positive map T : V ℳkV → V ℳkV is doubly stochastic if the following equivalent conditions hold:
- (1)
the matrix Am×m, defined as
Aij = tr(T(vivi*)wjwj*), is doubly stochastic for every choice of orthonormal bases v1, …, vm and w1, …, wm of Im(V),
- (2)
T (V ) = T*(V ) = V, where T* : V ℳkV → V ℳkV is the adjoint of T : V ℳkV → V ℳkV with respect to the trace inner product.
A positive map T : V ℳkV → V ℳkV is said to be fully indecomposable if the following equivalent conditions hold:
- (1)
the matrix Am×m, defined as
Aij = tr(T(vivi*)wjwj*), is fully indecomposable [22] for every choice of orthonormal bases v1, …, vm and w1, …, wm of Im(V),
- (2)
rank(X) + rank(Y) < rank(V), whenever X, Y ∈ (V ℳkV ∩ Pk) ∖ {0} and tr(T(X)Y) = 0,
- (3)
rank(T (X)) > rank(X), ∀X ∈ V ℳkV ∩ Pk such that 0 < rank(X) < rank(V).
Below we prove two lemmas concerning self-adjoint maps with respect to the trace inner product.
Let T : V ℳkV → V ℳkV be a fully indecomposable self-adjoint map. There is R ∈ V ℳkV such that R*T (R(⋅)R*)R : V ℳkV → V ℳkV is doubly stochastic.
Since T : V ℳkV → V ℳkV is fully indecomposable, it has total support [21, Lemma 2.3] or it has a positive achievable capacity [4]. So there are matrices A, B ∈ V ℳkV such that rank(A) = rank(B) = rank(V) and T1(X) = B*T(AXA*)B is doubly stochastic [21, Theorem 3.7]. Notice that T1 is still fully indecomposable. Now,
Let C = EFG* and D = HLJ* be singular value decompositions of C and D, where
E = (e1, …, em), G = (g1, …, gm), H = (h1, …, hm), J = (j1, …, jm) ∈ ℳk×m and the columns of each of these matrices form an orthonormal basis of Im(V). F = diagonal(f1, …, fm), L = diagonal(l1, …, lm) and fi > 0, li > 0 for every i.
Next, define R, S ∈ ℳm×m as
By Equation (1),
Thus, L2, F2 are positive diagonal matrices such that L2SF2 is doubly stochastic by Definition 2. Recall that S is a fully indecomposable matrix by Definition 3.
Since S is fully indecomposable, by a theorem proved in [23], the diagonal matrices L2 and F2 such that L2SF2 is doubly stochastic must be unique up to multiplication by positive numbers; but Id·S·Id is also doubly stochastic.
Thus, L = a−2Id and F = a2Id for some a > 0.
Therefore, B+A = C = a2U, where U = EG*. Notice that U V U* = V.
In addition, BB+A = a2BU. Since BB+ = V and V A = A, we obtain A = a2BU.
Thus,
Finally, (aB)*T ((aB)(⋅)(aB)*)(aB) is doubly stochastic too, since V = U V U*.
Let T : V ℳkV → V ℳkV be a self-adjoint positive map such that v ∉ ker(T (vv*)) for every
This proof is an induction on rank(V). If rank(V) = 1, then V ℳkV = {λvv*, λ ∈ ℂ}. Thus, T(vv*) = μvv*, where μ > 0 by hypothesis. Define
Let rank(V ) = n > 1 and assume the validity of this theorem whenever the rank of the orthogonal projection is less than n. Consider all pairs of orthogonal projections (V1, W1) such that
Since there is no
If for every aforementioned pair (V1, W1), we have rank(V1) + rank(W1) < rank(V), then T is fully indecomposable by Definition 3. So the result follows by Lemma 6. Let us assume that there is such a pair (V1, W1) satisfying rank(V1) + rank(W1) = rank(V). Since
Next, since T is self-adjoint so is T′, hence tr(T′(V − V1)V1) = 0. These last two equalities imply that
Of course the restrictions T′|V1ℳkV1 and T′|(V−V1)ℳk(V−V1) are self-adjoint and there is no
By induction hypothesis, there are R1 ∈ V1ℳkV1 and R2 ∈ (V − V1)ℳk(V − V1) such that
Set R = R1 + R2 ∈ V ℳkV. Notice that T″(X) = R*T′(RXR*)R is self-adjoint and
Hence, T″ : V ℳkV → V ℳkV is doubly stochastic.
Let A ∈ ℳk ⊗ ℳk be a Hermitian matrix such that GA : ℳk → ℳk is a self-adjoint positive map and tr(A(vv* ⊗ vv*)) > 0 for every v ∈ ℂk ∖ {0}. There is an invertible matrix R ∈ ℳk such that
- (1)
λ1 = 1 and
γ1 = Id/√k,
- (2)
λi ∈ ℝ and
γi = γi* for every i,
- (3)
1 ≥ |λi| for every i,
- (4)
tr(γiγj) = 0 for every i ≠ j and
tr(γi²) = 1 for every i.
By the definition of GA : ℳk → ℳk (given in the introduction), notice that
Hence v ∉ ker GA(vv*) for every v ∈ ℂk ∖ {0}.
By Lemma 7, there is an invertible matrix R ∈ ℳk such that R*GA(RXR*)R is doubly stochastic.
Define B = (R* ⊗ R*)A(R ⊗ R) and notice that GB(X) = R*GA(RXR*)R. Therefore, GB is a self-adjoint doubly stochastic map, i.e.,
Let
GB(γi) = λiγi, where |λi| > 0 for 1 ≤ i ≤ n
GB(γi) = 0, for i > n.
Since GB is a positive map satisfying GB(Id) = Id, its spectral radius is 1 [24, Theorem 2.3.7]. So |λi| ≤ 1 for every i.
Finally, by the definition of GB,
The next theorem shows that SPC states and invariant under realignment states can be put in the filter normal form retaining their SPC structure and invariance under realignment, respectively.
Let γ ∈ ℳk ⊗ ℳk be a positive semidefinite Hermitian matrix such that rank(γA) = k. There is an invertible matrix R ∈ ℳk such that
- (1)
(R* ⊗ R*)γ(R ⊗ R) = Σi λiγi ⊗ γi, if ℛ(γΓ) is positive semidefinite;
- (2)
(R* ⊗ Rt)γ(R ⊗ R̅) = Σi λiγi ⊗ γ̅i, if ℛ(γ) is positive semidefinite,
where
- a)
λ1 = 1/k and γ1 = Id/√k,
- b)
1/k ≥ λi > 0 and γi = γi* for every i,
- c)
tr(γiγj) = 0 for every i ≠ j and
tr(γi²) = 1 for every i.
(1) If γ is a state such that ℛ(γΓ) is positive semidefinite, then, by [5, Corollary 25], γ can be written as
Hence
Now, let v ∈ ℂk be such that
Since ai > 0 and tr(Bi vv*) ∈ ℝ for every i, tr(Bi vv*) = 0 for every i. Therefore,
By hypothesis γA is positive definite, hence v = 0.
So, by Corollary 1, there is an invertible matrix R such that
Finally, since Gγ has only non-negative eigenvalues and λ1, …, λn are non-null eigenvalues of R*Gγ (RXR*)R (as seen in the proof of Corollary 1), λ1, …, λn are positive.
(2) If γ is a state such that ℛ(γ) is positive semidefinite, then, by [5, Corollary 25], γ can be written as
Consider
The next corollary says that after a local operation any state possesses a Schmidt decomposition such that one of its matrices is a multiple of the identity.
Let γ ∈ ℳk ⊗ ℳk be a state such that rank(γA) = k. There is an invertible matrix R ∈ ℳk such that
- a)
a1 ≥ ai > 0, for every 1 ≤ i ≤ n, and
γ1 = Id/√k,
- b)
γi = γi*, δi = δi* for every i,
- c)
tr(γiγj) = tr(δiδj) = 0 for every i ≠ j and
tr(γi²) = tr(δi²) = 1.
First, since γ is a state, so is
Now, by items (8) and (9) of Lemma 1,
It is not difficult to check that
By item (2) of Theorem 2, there is an invertible matrix R ∈ ℳk such that
- a)
λ1 = 1/k and γ1 = Id/√k,
- b)
1/k ≥ λi > 0 and γi = γi* for every i,
- c)
tr(γiγj) = 0 for every i ≠ j and
tr(γi²) = 1 for every i.
Define δ = (R* ⊗ Id)γ(R ⊗ Id) and notice that
Thus,
By item c) of Remark 1,
By [24, Theorem 2.3.7], λ1 is the largest eigenvalue of the positive map Fδ ∘ Gδ. So
Next, let
Notice that
If Gδ(δi) ≠ 0k×k, for 1 ≤ i ≤ n, then define ai = ‖Gδ(δi)‖2 > 0. Thus,
Notice that
In addition, since Fδ(Gδ(δj)) is a multiple of δj, δi is orthogonal to Fδ(Gδ(δj)) for i ≠ j,
Finally,
It remains to check that
There is λ > 0 such that Id − λY is positive semidefinite, so Fγ (Gγ (Id − λY )) is positive semidefinite too. Thus, Fγ (Gγ (Id)) = Fγ (Gγ (Id − λY )) + λγA, which is positive definite.
Finally, since γ and
Therefore
For the last result of this section, we finally prove that every PPT state whose rank coincides with its two reduced ranks can be put in the filter normal form. This is the key result to prove Theorem 6. See how the relations between linear contractions (Proposition 1) are important in proving this result.
If γ ∈ ℳk ⊗ ℳk is a PPT state such that rank(γ) = rank(γA) = rank(γB) = k, then there are invertible matrices R, S ∈ ℳk such that
First, define γ1 = (Id ⊗ S)γ(Id ⊗ S*) such that
Since γ1 is also PPT,
Hence
Notice that
Now, by items (8) and (9) of Lemma 1,
Next, on the one hand
On the other hand
Therefore,
Since
We have just discovered that
Now, by Corollary 2, there is an invertible matrix R ∈ ℳk such that
λ1 ≥ λi > 0 for every i and
tr(AiAj) = tr(BiBj) = 0 for every i ≠ j and
In addition, we can normalize its trace, so assume that tr(γ2) = 1.
Hence,
As
Next,
Notice that the largest singular value of Gγ2 is λ1 by item a) above and the definition of Gγ2. Hence, by Lemma 3, ‖ℛ(γ2)‖∞ = λ1. Moreover, recall that rank(γ2) = rank(γ1) = rank(γ) = k. Therefore,
The inequalities above are, in fact, equalities, which only happens when all the k non-null eigenvalues of γ2 are equal to λ1. Therefore, 1 = tr(γ2) = kλ1. So
So
Finally, notice that γ2 = (R* ⊗ Id)γ1(R ⊗ Id) = (R* ⊗ S)γ(R ⊗ S*). The proof is complete.
In this section, we prove that rank(γ) ≥ k, whenever a state γ is PPT or SPC or invariant under realignment and rank(γA) = rank(γB) = k. Then we show that if rank(γ) = k, then γ is separable in each of these cases.
Explicit examples of states satisfying the hypotheses of these theorems are the separable states
We start this section by proving the SPC and the invariant under realignment cases. In their proofs we use the result that guarantees their separability whenever their non-null Schmidt coefficients are equal, which follows from the complete reducibility property [5, Proposition 15].
Before proving the inequality, notice that by the symmetry of their Schmidt decompositions [5, Corollary 25], γB = γA or
If γ ∈ ℳk ⊗ ℳk is an SPC state such that rank(γA) = k, then rank(γ) ≥ k. In addition, if the equality holds, then γ is separable.
By Theorem 2, there is an invertible matrix R such that
Now, notice that δ is still SPC by [5, Corollary 25]. Hence
So
Since R is invertible, rank(γ) = rank(δ) ≥ k. For the next part, assume rank(γ) = k. Then rank(δ) = k. Therefore,
Since the equality tr(δ) = ‖δ‖∞ rank(δ) holds, the non-null eigenvalues of δ are equal to ‖δ‖∞. So tr(δ) = k‖δ‖∞ = 1. Hence
Next, since the linear contraction ℛ((⋅)Γ) preserves the Frobenius norm of δ,
By item (5) of Lemma 1, ℛ(δΓ) = ℛ(δ)F. Since F is unitary, ‖ℛ(δ)F ‖∞ = ‖ℛ(δ)‖∞. Therefore,
Now, since δ is SPC, by its definition, ℛ(δΓ) is positive semidefinite. Hence
By item (7) of Lemma 1, ℛ(δΓ)Γ = δF. Since ℛ(δΓ) is positive semidefinite, ‖ℛ(δΓ)‖1 = tr(ℛ(δΓ)) = tr(ℛ(δΓ)Γ) ≤ ‖ℛ(δΓ)Γ‖1 = ‖δF‖1 = ‖δ‖1 = 1. Using these pieces of information in Equation (7) we obtain
Again, tr(ℛ(δΓ)2) = ‖ℛ(δΓ)‖∞‖ℛ(δΓ)‖1 only holds if the non-null eigenvalues of the positive semidefinite Hermitian matrix ℛ(δΓ) are equal to
Finally, the non-null eigenvalues of ℛ(δΓ) are the singular values of Gδ, which are the non-null Schmidt coefficients of δ, as explained in the proof of Lemma 3. Since δ is SPC and its non-null Schmidt coefficients are equal, δ is separable by [5, Proposition 15]. Since R is invertible, γ is separable too.
The invariant under realignment counterpart is proved next in a similar way with minor modifications.
If γ ∈ ℳk ⊗ ℳk is an invariant under realignment state such that rank(γA) = k, then rank(γ) ≥ k. In addition, if the equality holds, then γ is separable.
By Theorem 2, there is an invertible matrix R such that
Now, by item (3) of Lemma 1,
Thus, δ is invariant under realignment. Hence
So
Since R is invertible, rank(γ) = rank(δ) ≥ k. For the next part, assume rank(γ) = k. Then rank(δ) = k. Therefore,
Since the equality tr(δ) = ‖δ‖∞ rank(δ) holds, the non-null eigenvalues of δ are equal to ‖δ‖∞. Moreover, tr(δ) = k‖δ‖∞ = 1. Hence
Since ℛ(δ) = δ, the non-null singular values of ℛ(δ) are the non-null eigenvalues of δ, which are equal in this case. Hence the non-null Schmidt coefficients of the Schmidt decomposition of δ are equal by Lemma 3. We know that every invariant under realignment state with equal non-null Schmidt coefficients is separable by [5, Proposition 15]. So δ is separable and so is γ, since R is invertible.
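A concrete minimal-rank example (ours) illustrating Theorems 4 and 5: the separable state γ = (1/k)Σi eiei* ⊗ eiei* is invariant under realignment (and SPC and PPT), its rank equals both reduced ranks, and its non-null Schmidt coefficients, read off from the singular values of ℛ(γ), are all equal to 1/k.

```python
import numpy as np
k = 3

def realign(g):
    return g.reshape(k, k, k, k).transpose(0, 2, 1, 3).reshape(k * k, k * k)

gamma = sum(np.kron(np.outer(e, e), np.outer(e, e)) for e in np.eye(k)) / k
T = gamma.reshape(k, k, k, k)
gA, gB = np.trace(T, axis1=1, axis2=3), np.trace(T, axis1=0, axis2=2)

print(np.allclose(realign(gamma), gamma))                          # invariant under realignment
print(np.linalg.matrix_rank(gamma),
      np.linalg.matrix_rank(gA), np.linalg.matrix_rank(gB))        # all equal to k
print(np.round(np.linalg.svd(realign(gamma), compute_uv=False), 6))  # k values 1/k, rest 0
```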
For our final results, we need some tools developed in Sections 2 and 3 together with the complete reducibility property. First, we show, in the next lemma, that the rank of a PPT state is greater than or equal to its reduced ranks.
Let γ ∈ ℳk ⊗ ℳm be a PPT state. Then rank(γ) ≥ max{rank(γA), rank(γB)}.
Let us assume without loss of generality that max{rank(γA), rank(γB)} = rank(γA) = k. So there is an invertible matrix R ∈ ℳk such that
Since δ is PPT, by Theorem 1,
Thus, rank(γ) = rank(δ) ≥ max{rank(γA), rank(γB)}.
We complete this paper by proving that a PPT state with minimal rank must be separable, which is the PPT counterpart of Theorems 4 and 5.
If δ ∈ ℳk ⊗ ℳk is a PPT state such that rank(δ) = rank(δA) = rank(δB) = k, then δ is separable.
This result is trivial in ℳ2 ⊗ ℳ2, since every PPT state there is separable. Assume the result is true in ℳi ⊗ ℳi for i < k. Let us prove the result in ℳk ⊗ ℳk. First of all, by Theorem 3, there are invertible matrices M, N ∈ ℳk such that γ = (M ⊗N )δ(M* ⊗N*) satisfies
Next, by Theorem 1, since
Now, since γ is a positive semidefinite Hermitian matrix, the linear transformations Fγ and Gγ are positive maps and adjoints with respect to the trace inner product. In addition, notice that
By [24, Theorem 2.3.7], the spectral radius of the positive operator Fγ ∘ Gγ is
Next, since γ has k linearly independent eigenvectors associated to
Notice that there are R, S ∈ ℳk with rank m such that
Therefore, all the inequalities above are equalities, which imply
Gγ (RR*) = λSS* for some λ > 0, since Gγ is a positive map, and
Hence
Therefore,
Since γ is PPT, by the complete reducibility property,
The proof is almost done; we just need to check that (V ⊗ W)γ(V ⊗ W), (V⊥ ⊗ W⊥)γ(V⊥ ⊗ W⊥) are multiples of states satisfying the hypotheses, and since rank(V) = rank(W) < k and rank(V⊥) = rank(W⊥) < k, the result follows by induction.
Notice that by Equation (9) and the definition of Gγ,
Next, recall that V + V⊥ = W + W⊥ = Id, V V⊥ = W W⊥ = 0 and
Therefore
Now, define
Notice that
Thus, max{rank((γ1)A), rank((γ1)B)} = max{rank(V ), rank(W )} = max{rank(R), rank(S)} = m.
Moreover, notice that
Therefore max{rank((γ2)A), rank((γ2)B)} = max{rank(V⊥), rank(W⊥)} = k − m.
By their definitions, γ1 and γ2 are PPT. So, by Lemma 8, rank(γ1) ≥ m and rank(γ2) ≥ k − m.
Recall that k = rank(γ) = rank(γ1) + rank(γ2) ≥ m + (k − m). Thus rank(γ1) = m and rank(γ2) = k − m.
Since
γ1 has m eigenvalues equal to
1/m and the others 0,
γ2 has k − m eigenvalues equal to 1/(k − m) and the others 0.
Hence,
γ1 has m eigenvalues equal to
1/m, (γ1)A = (1/m)V, (γ1)B = (1/m)W and rank(V) = rank(W) = m,
γ2 has k − m eigenvalues equal to 1/(k − m), (γ2)A = (1/(k − m))V⊥, (γ2)B = (1/(k − m))W⊥ and rank(V⊥) = rank(W⊥) = k − m.
By induction hypothesis, γ1 and γ2 are separable, and so are γ and δ.
In this paper we proved new results for a triad of quantum state types which includes the positive under partial transpose type. We obtained the same upper bound for the spectral radius of these types of quantum states. We then showed that the first two types can be put in the filter normal form and the third type only under some restriction. Finally, we proved that there is a lower bound for their ranks and whenever this lower bound is attained these states are separable. This last result is another consequence of their complete reducibility property. This is sufficient evidence that these states are deeply connected. Moreover, their complete reducibility property is a unifying force connecting and providing many results in entanglement theory.