A Triality Pattern in Entanglement Theory
By: Daniel Cariello  
Open Access
Nov 2024


1. Introduction

The separability problem in quantum theory [1] asks for a criterion to distinguish the separable states from the entangled states. It is known that separable states in ℳk ⊗ ℳm form a subset of the positive under partial transpose states (PPT states), and in low dimensions, km ≤ 6, these two sets coincide [2,3], solving the problem. However, in larger dimensions, km > 6, there are entangled PPT states. In addition, for arbitrary k, m, this problem is known to be hard [4], thus any reduction of this problem to a subset of the PPT states of ℳk ⊗ ℳm is certainly important.

In [5], a procedure to reduce the separability problem to a proper subset of PPT states was presented. The idea behind this reduction can be summarized as follows. Let A = Σi Ai ⊗ Bi ∈ ℳk ⊗ ℳm be a quantum state and consider the maps GA : ℳk → ℳm and FA : ℳm → ℳk defined by GA(X) = Σi tr(AiX)Bi and FA(X) = Σi tr(BiX)Ai, where tr(X) stands for the trace of X.
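In concrete terms, GA and FA can be computed basis-free as partial traces. The numpy sketch below is our own illustration (the variable names A_terms, B_terms are made up); it checks GA(X) = Σi tr(AiX)Bi against the partial-trace formula tr1(A(X ⊗ Id)), and likewise for FA.

```python
import numpy as np

k, m, n = 3, 4, 5
rng = np.random.default_rng(0)

# a random decomposition A = Σ_i A_i ⊗ B_i (A_terms/B_terms are illustrative names)
A_terms = [rng.standard_normal((k, k)) for _ in range(n)]
B_terms = [rng.standard_normal((m, m)) for _ in range(n)]
A = sum(np.kron(a, b) for a, b in zip(A_terms, B_terms))

def G_A(X):
    """G_A(X) = Σ_i tr(A_i X) B_i, computed basis-free as tr_1(A (X ⊗ Id))."""
    M = (A @ np.kron(X, np.eye(m))).reshape(k, m, k, m)
    return np.einsum('ipiq->pq', M)          # trace over the first tensor factor

def F_A(Y):
    """F_A(Y) = Σ_i tr(B_i Y) A_i, computed basis-free as tr_2(A (Id ⊗ Y))."""
    M = (A @ np.kron(np.eye(k), Y)).reshape(k, m, k, m)
    return np.einsum('ipjp->ij', M)          # trace over the second tensor factor

X = rng.standard_normal((k, k))
Y = rng.standard_normal((m, m))
assert np.allclose(G_A(X), sum(np.trace(Ai @ X) * Bi
                               for Ai, Bi in zip(A_terms, B_terms)))
assert np.allclose(F_A(Y), sum(np.trace(Bi @ Y) * Ai
                               for Ai, Bi in zip(A_terms, B_terms)))
```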

Now, if A is PPT and X is a positive semidefinite Hermitian eigenvector of FAGA : ℳk → ℳk, then A decomposes as a sum of states with orthogonal local supports [5, Lemma 8], i.e., A = (V ⊗ W)A(V ⊗ W) + (V⊥ ⊗ W⊥)A(V⊥ ⊗ W⊥), where V, W, V⊥, W⊥ are the orthogonal projections onto Im(X), Im(GA(X)), ker(X) and ker(GA(X)), respectively.

Then the algorithm proceeds to decompose (V ⊗ W)A(V ⊗ W) and (V⊥ ⊗ W⊥)A(V⊥ ⊗ W⊥), since they are also PPT, whenever such an X is found. Eventually this process stops with A = Σi (Vi ⊗ Wi)A(Vi ⊗ Wi), where each (Vi ⊗ Wi)A(Vi ⊗ Wi) cannot be further decomposed for 1 ≤ i ≤ n. These states are named weakly irreducible. Finally, A is separable if and only if each (Vi ⊗ Wi)A(Vi ⊗ Wi) is separable; therefore this algorithm reduces the separability problem to the weakly irreducible PPT case.

Positive under partial transpose states are not the only type of states on which this procedure works: its key feature, namely that A ∈ ℳk ⊗ ℳm breaks whenever a certain positive semidefinite Hermitian eigenvector is found, also holds for two other types of quantum states. From now on we say that A ∈ ℳk ⊗ ℳm has the complete reducibility property if, for every positive semidefinite Hermitian eigenvector X of FAGA : ℳk → ℳk, we have A = (V ⊗ W)A(V ⊗ W) + (V⊥ ⊗ W⊥)A(V⊥ ⊗ W⊥), where V, W, V⊥, W⊥ are the orthogonal projections onto Im(X), Im(GA(X)), ker(X) and ker(GA(X)), respectively. In [5], a search for other types of quantum states satisfying this property was conducted, finding only three: positive under partial transpose states (PPT states), symmetric with positive coefficients states (SPC states) and invariant under realignment states. So far this property has only been verified for this triad of quantum states.

It is not hard to compute a single positive semidefinite Hermitian eigenvector of FAGA : ℳk → ℳk when A is positive semidefinite. In the next section, we shall see that under this hypothesis GA : ℳk → ℳm and FA : ℳm → ℳk are adjoint positive maps. So if λ is the largest positive eigenvalue of FAGA, then P = limn→∞ (FAGA/λ)n is the orthogonal projection onto the eigenspace of FAGA associated to λ. As a limit of positive maps, P is also a positive map. Therefore P(Id) is a positive semidefinite Hermitian matrix belonging to the eigenspace of FAGA associated to λ.
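This limiting construction is easy to simulate. The sketch below is our own illustration (a random state stands in for A; the iteration count and tolerances are arbitrary choices): it builds the matrix of FAGA in the basis {Eij}, reads off its largest eigenvalue λ, and iterates (FAGA/λ)n on the identity to obtain a positive semidefinite eigenvector.

```python
import numpy as np

k, m = 3, 3
rng = np.random.default_rng(1)
W = rng.standard_normal((k*m, k*m)) + 1j * rng.standard_normal((k*m, k*m))
A = W @ W.conj().T          # a random positive semidefinite state (illustrative)
A /= np.trace(A).real

def G_A(X):                 # tr_1(A (X ⊗ Id))
    return np.einsum('ipiq->pq', (A @ np.kron(X, np.eye(m))).reshape(k, m, k, m))

def F_A(Y):                 # tr_2(A (Id ⊗ Y))
    return np.einsum('ipjp->ij', (A @ np.kron(np.eye(k), Y)).reshape(k, m, k, m))

# matrix of F_A ∘ G_A in the basis {E_ij} of M_k, to read off its top eigenvalue λ
cols = []
for i in range(k):
    for j in range(k):
        E = np.zeros((k, k)); E[i, j] = 1
        cols.append(F_A(G_A(E)).reshape(-1))
lam = max(np.linalg.eigvals(np.column_stack(cols)).real)

# iterate (F_A ∘ G_A / λ)^n on Id; the limit P(Id) is a PSD eigenvector for λ
X = np.eye(k, dtype=complex)
for _ in range(1000):
    X = F_A(G_A(X)) / lam
assert np.allclose(F_A(G_A(X)), lam * X, atol=1e-6)
assert np.linalg.eigvalsh((X + X.conj().T) / 2).min() > -1e-8
```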

Now, the number of times that A breaks into weakly irreducible pieces is maximized when the non-null eigenvalues of FAGA : ℳk → ℳk are equal, because in this case we are able to produce many positive semidefinite Hermitian eigenvectors. In this situation, we have the following theorem:

If the non-null eigenvalues of FAGA are equal, then A is separable, whenever A is PPT or SPC or invariant under realignment [5, Proposition 15]. Notice that the eigenvalues of FAGA are the squares of the Schmidt coefficients of A.

In [5, Lemma 42], it was also noticed that every SPC state and every invariant under realignment state is PPT in ℳ2 ⊗ ℳ2. Before that, in [6], it was proved that a state supported on the symmetric subspace of ℂk ⊗ ℂk is PPT if and only if it is SPC. This is ample evidence of how closely linked these three types of quantum states are.

In this work we prove new results for this triad of quantum states and a new consequence for their complete reducibility property. One of these results concerns the separability of these states. The aforementioned connections along with our new results lead us to notice a certain triality pattern:

For each proven theorem for one of these three types of states, there are corresponding counterparts for the other two.

We believe that a solution to the separability problem for SPC states or invariant under realignment states would shed light on bound entanglement. This is our reason for studying these types of states.

Next, we would like to point out the origin of these connections, which is also the source of the main tools used in this work. For this, we must consider the group of linear contractions [7] and some of its properties. For each permutation σ ∈ S4, the linear transformation Lσ : ℳk ⊗ ℳk → ℳk ⊗ ℳk satisfying Lσ(v1v2t ⊗ v3v4t) = vσ(1)vσ(2)t ⊗ vσ(3)vσ(4)t, where vi ∈ ℂk, is called a linear contraction.

The term contraction comes from the fact that ‖Lσ(γ)‖1 ≤ ‖γ‖1 for every σ ∈ S4, whenever γ ∈ ℳk ⊗ ℳk is a separable state, where ‖ ⋅ ‖1 is the trace norm of a matrix (i.e., the sum of its singular values). Hence, if γ ∈ ℳk ⊗ ℳk is a state and ‖Lσ(γ)‖1 > ‖γ‖1 for some σ ∈ S4, then γ is entangled. This observation provides two useful criteria for entanglement detection. In cycle notation, they are:

  • the PPT criterion [2,3], when σ = (34), and

  • the CCNR criterion [8,9], when σ = (23).
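For a concrete illustration (ours, not from the paper), both criteria detect the entanglement of the two-qubit maximally entangled state, with ℛ and Γ implemented as index permutations:

```python
import numpy as np

def ptranspose(g, k):   # L_(34): transpose the second tensor factor
    return g.reshape(k, k, k, k).transpose(0, 3, 2, 1).reshape(k*k, k*k)

def realign(g, k):      # L_(23): the realignment map
    return g.reshape(k, k, k, k).transpose(0, 2, 1, 3).reshape(k*k, k*k)

def trace_norm(M):      # ||M||_1 = sum of singular values
    return np.linalg.svd(M, compute_uv=False).sum()

k = 2
phi = np.zeros(k*k); phi[0] = phi[3] = 1/np.sqrt(2)   # (|00> + |11>)/sqrt(2)
gamma = np.outer(phi, phi)                            # a pure entangled state

assert np.isclose(trace_norm(gamma), 1)
assert trace_norm(ptranspose(gamma, k)) > 1   # PPT criterion, σ = (34)
assert trace_norm(realign(gamma, k)) > 1      # CCNR criterion, σ = (23)
```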

Despite the name – contraction – these linear maps are in fact isometries, as they map the orthonormal basis {eiejt ⊗ ekelt, 1 ≤ i, j, k, l ≤ k} of ℳk ⊗ ℳk onto itself. Hence, they preserve the Frobenius norm of γ ∈ ℳk ⊗ ℳk, denoted here by ‖γ‖2. This norm is an invariant exploited several times in this work.

In addition, the set of linear contractions is a group under composition generated by the partial transposes (L(34)(γ) = γΓ and L(12)) and the realignment map (L(23)(γ) = ℛ(γ)). The relations among the elements of this group are extremely useful as we shall see in the proofs of our novel results.

Another very useful fact about the realignment map is that it is a homomorphism taking a new product, which generalizes the Hadamard product, to the usual matrix product (see item (8) of Lemma 1).

Finally, these maps allow connecting this triad of quantum states from their origins, that is, their definitions:

  • PPT states are the states that remain positive under partial transpose (γ ≥ 0, γΓ ≥ 0) [2,3],

  • SPC states are the states that remain positive under partial transpose composed with realignment (γ ≥ 0, ℛ(γΓ) ≥ 0) ([5, Corollary 25] and [5, Definition 17]),

  • invariant under realignment states are the states that remain the same under realignment (γ ≥ 0, ℛ(γ) = γ).
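These definitions are easy to test numerically. In the sketch below (our own examples, not from the paper), γ = (Id + uut)/(k² + k) is invariant under realignment, while the normalized projection onto the symmetric subspace, (Id + F)/(k² + k), is both SPC and PPT, in line with the result of [6] quoted above.

```python
import numpy as np

def ptranspose(g, k):
    return g.reshape(k, k, k, k).transpose(0, 3, 2, 1).reshape(k*k, k*k)

def realign(g, k):
    return g.reshape(k, k, k, k).transpose(0, 2, 1, 3).reshape(k*k, k*k)

def is_psd(M):
    return np.linalg.eigvalsh((M + M.conj().T) / 2).min() > -1e-10

k = 3
Id = np.eye(k*k)
u = np.eye(k).reshape(-1)          # u = Σ_i e_i ⊗ e_i
uu = np.outer(u, u)
F = ptranspose(uu, k)              # flip operator, F = (uu^t)^Γ

g_inv = (Id + uu) / (k*k + k)      # invariant under realignment, since R(Id) = uu^t
assert is_psd(g_inv) and np.allclose(realign(g_inv, k), g_inv)

g_spc = (Id + F) / (k*k + k)       # normalized projection onto symmetric subspace
assert is_psd(g_spc)
assert is_psd(realign(ptranspose(g_spc, k), k))   # SPC: R(γ^Γ) ≥ 0
assert is_psd(ptranspose(g_spc, k))               # and PPT as well
```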

These observations on the group of linear contractions are used throughout this paper; they are the keys to obtain our novel results. Now, let us describe these results.

Our first result is an upper bound on the spectral radius for this triad of states. We show that if γ = Σi Ci ⊗ Di ∈ ℳk ⊗ ℳk is PPT or SPC or invariant under realignment, then ‖γ‖∞ ≤ min{‖γA‖∞, ‖γB‖∞, ‖ℛ(γ)‖∞}, where ‖ ⋅ ‖∞ is the operator norm, and γA = Σi Ci tr(Di) and γB = Σi Di tr(Ci) are the reduced states. Let us say that the ranks of γA and γB are the reduced ranks of γ.

Our second result regards the filter normal form of SPC states, invariant under realignment states and certain types of PPT states. This normal form has been used in entanglement theory, for example, to provide a different solution for the separability problem in ℳ2 ⊗ ℳ2 [10,11,12] or to prove the equivalence of some criteria for entanglement detection [13]. Here we show that states which are SPC or invariant under realignment can be put in the filter normal form and their normal forms can still be chosen to be SPC and invariant under realignment, respectively. In other words, if A, B ∈ ℳk ⊗ ℳk are states such that

  • (1)

    A is SPC, then there is an invertible matrix R ∈ ℳk such that (R ⊗ R)A(R ⊗ R)* = Σi λi δi ⊗ δi, where λ1 = 1/k, δ1 = Id/√k, λi > 0 and δi is Hermitian for every i, and tr(δiδj) = 0 for i ≠ j.

  • (2)

    B is invariant under realignment, then there is an invertible matrix R ∈ ℳk such that (R ⊗ R̄)B(R ⊗ R̄)* = Σi λi δi ⊗ δ̄i, where λ1 = 1/k, δ1 = Id/√k, λi > 0 and δi is Hermitian for every i, and tr(δiδj) = 0 for i ≠ j.

Our PPT counterpart to these two results, Theorem 3, says that every PPT state whose rank is equal to its two reduced ranks can be put in the filter normal form, which we believe is a novel contribution to the filter normal form literature.

We are not aware of any general algorithm capable of determining whether a given state can be put in the filter normal form or not. From this point of view, these results are not only novel but are highly relevant to the literature on the filter normal form.

Our final result is a lower bound for the ranks of these three types of states. We show that the rank of PPT states, SPC states and invariant under realignment states cannot be smaller than their reduced ranks (when these are equal), and whenever this minimum is attained the states are separable.

In [14], it was proved that a state γ such that rank(γ) ≤ max{rank(γA), rank(γB)} is separable by just being positive under partial transpose. In [15], it was shown that the rank of a separable state is greater than or equal to its reduced ranks. So if γ is PPT and rank(γ) ≤ max{rank(γA), rank(γB)}, then it is separable and rank(γ) = max{rank(γA), rank(γB)}.

Thus, our last result is known for PPT states, but it is original for SPC states and invariant under realignment states. The proofs presented here for these results are completely original. We combine the fact that every PPT state, SPC state and invariant under realignment state with minimal rank can be put in the filter normal form with the complete reducibility property to finish the proofs. These results show a nice interplay between the filter normal form and the complete reducibility property.

It is quite surprising that all three of these types of states guarantee separability under such a condition. In the entanglement literature, the opposite is usually the case.

For example, it is known that any given bipartite state γ satisfies rank(γ) ≥ max{rank(γA), rank(γB)}/min{rank(γA), rank(γB)} (see [16, Theorem 3.3]) and, whenever equality holds, this γ is entangled (see [17, Theorem 1]). From this point of view, states that reach the minimum possible rank value are usually entangled.

The results described above show how fundamental the complete reducibility property is to entanglement theory; it acts as a unifying approach for many results. Another recent application, on entanglement breaking Markovian dynamics, was described in [18]. In fact, even outside entanglement theory, we can find consequences of that property, for example, a new proof of Weiner’s theorem [19] on mutually unbiased bases found in [5].

The triality pattern described above together with the complete reducibility property form a potential source of information for entanglement theory.

This paper is organized as follows. In Section 2, we present some preliminary results, which are mainly facts about the group of linear contractions. In Section 3, we obtain an upper bound for the spectral radius of our special triad of quantum states. In Section 4, we show that SPC states and invariant under realignment states can be put in the filter normal form and their normal forms retain their shape. In addition, we show that PPT states with minimal rank can also be put in the filter normal form. In Section 5, we prove that the ranks of our triad of quantum states cannot be inferior to their reduced ranks and whenever this minimum is attained the states are separable.

2. Preliminary Results

In this section we present some preliminary results. We begin by noticing that GA : ℳk → ℳm and FA : ℳm → ℳk defined in the introduction are adjoint maps with respect to the trace inner product, when A is Hermitian. The reason behind this is quite simple: if A is Hermitian, then FA(Y)* = FA*(Y*) = FA(Y*) for every Y ∈ ℳm, hence tr(GA(X)Y*) = tr(A(X ⊗ Y*)) = tr(X FA(Y*)) = tr(X FA(Y)*).

Notice that for positive semidefinite Hermitian matrices X ∈ ℳk, Y ∈ ℳm and A ∈ ℳk ⊗ ℳm, we have tr(A(XY*)) ≥ 0. Thus, the equality above also shows that GA and FA are positive maps (Definition 2) when A is positive semidefinite. So in this case FAGA : ℳk → ℳk is a self-adjoint positive map.
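A quick numerical sanity check of this adjointness (our own illustration, with GA and FA computed via partial traces):

```python
import numpy as np

k, m = 3, 4
rng = np.random.default_rng(7)
W = rng.standard_normal((k*m, k*m)) + 1j * rng.standard_normal((k*m, k*m))
A = W @ W.conj().T                    # Hermitian (indeed positive semidefinite)

def G_A(X):  # tr_1(A (X ⊗ Id))
    return np.einsum('ipiq->pq', (A @ np.kron(X, np.eye(m))).reshape(k, m, k, m))

def F_A(Y):  # tr_2(A (Id ⊗ Y))
    return np.einsum('ipjp->ij', (A @ np.kron(np.eye(k), Y)).reshape(k, m, k, m))

X = rng.standard_normal((k, k)) + 1j * rng.standard_normal((k, k))
Y = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))

# adjointness: tr(G_A(X) Y*) = tr(A (X ⊗ Y*)) = tr(X F_A(Y)*)
lhs = np.trace(G_A(X) @ Y.conj().T)
mid = np.trace(A @ np.kron(X, Y.conj().T))
rhs = np.trace(X @ F_A(Y).conj().T)
assert np.isclose(lhs, mid) and np.isclose(mid, rhs)
```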

These maps GA : ℳk → ℳm and FA : ℳm → ℳk are connected to the following generalization of the Hadamard product extensively used in [20] and required here a few times.

Definition 1

(Generalization of the Hadamard product). Let γ = Σi Ai ⊗ Bi ∈ ℳm ⊗ ℳk and δ = Σj Cj ⊗ Dj ∈ ℳk ⊗ ℳs. Define the product γ * δ ∈ ℳm ⊗ ℳs as γ * δ = Σi Σj Ai ⊗ Dj tr(BiCjt).
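A small numerical illustration (ours; dimensions and coefficient names are arbitrary) that cross-checks this definition against the sandwich formula given in Remark 1 a) below:

```python
import numpy as np

m = k = s = 2
rng = np.random.default_rng(2)
As = [rng.standard_normal((m, m)) for _ in range(3)]
Bs = [rng.standard_normal((k, k)) for _ in range(3)]
Cs = [rng.standard_normal((k, k)) for _ in range(3)]
Ds = [rng.standard_normal((s, s)) for _ in range(3)]
gamma = sum(np.kron(a, b) for a, b in zip(As, Bs))   # γ ∈ M_m ⊗ M_k
delta = sum(np.kron(c, d) for c, d in zip(Cs, Ds))   # δ ∈ M_k ⊗ M_s

# star product via the sandwich (Id ⊗ u^t ⊗ Id)(γ ⊗ δ)(Id ⊗ u ⊗ Id)
u = np.eye(k).reshape(-1, 1)               # u = Σ_i e_i ⊗ e_i as a column
P = np.kron(np.kron(np.eye(m), u.T), np.eye(s))
star = P @ np.kron(gamma, delta) @ P.T

# the definition, term by term: Σ_ij tr(B_i C_j^t) A_i ⊗ D_j
direct = sum(np.trace(Bi @ Cj.T) * np.kron(Ai, Dj)
             for Ai, Bi in zip(As, Bs) for Cj, Dj in zip(Cs, Ds))
assert np.allclose(star, direct)
```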

Let us recall some facts regarding this product.

Remark 1

Let u = Σi ei ⊗ ei ∈ ℂk ⊗ ℂk, where e1, …, ek is the canonical basis of ℂk.

  • a)

    Notice that ut(Bi ⊗ Cj)u = tr((Bi ⊗ Cj)uut) = tr(BiCjt), where Bi, Cj ∈ ℳk. Therefore, γ * δ = (Idm×m ⊗ ut ⊗ Ids×s)(γ ⊗ δ)(Idm×m ⊗ u ⊗ Ids×s), which implies that γ * δ is positive semidefinite whenever γ, δ are positive semidefinite. In addition, tr(γ * δ) = tr((γ ⊗ δ)(Id ⊗ uut ⊗ Id)) = tr((γB ⊗ δA)uut) = tr(γBδAt).

  • b)

    By [20, Proposition 8], γ * δ = (Fγ((⋅)t) ⊗ Id)(δ) = (Id ⊗ Gδ((⋅)t))(γ).

  • c)

    Let F = (uut)Γ ∈ ℳk ⊗ ℳk and notice that if γ = Σi Ai ⊗ Bi ∈ ℳk ⊗ ℳk, then FγF = Σi Bi ⊗ Ai. This real symmetric matrix F is unitary and is usually called the flip operator.

Now, notice that γ * (FγtF) = Σj Σi Ai ⊗ Ajt tr(BiBj). Hence tr(γ * (FγtF)) = tr((Σi tr(Ai)Bi)(Σj tr(Aj)Bj)) = tr(γB²) and Gγ*(FγtF)(X) = Σj Σi tr(AiX) tr(BiBj) Ajt = Σj tr((Σi tr(AiX)Bi)Bj) Ajt = Fγ(Gγ(X))t.

Next, we discuss some facts about the group of linear contractions. Actually, we need to focus only on three of these maps. The linear contractions important to us are L(34)(γ) = γΓ, L(24)(γ) = γF and L(23)(γ) = ℛ(γ).

Below we discuss several properties of these linear contractions such as relations among these elements and how they behave with respect to the product defined in Definition 1.

Lemma 1

Let γ, δ ∈ ℳk ⊗ ℳk and v = Σi ai ⊗ bi, w = Σj cj ⊗ dj ∈ ℂk ⊗ ℂk.

  • (1)

    ℛ(vwt) = V ⊗ W, where V = Σi aibit, W = Σj cjdjt.

  • (2)

    ℛ(ℛ(γ)) = γ

  • (3)

    ℛ((V ⊗ W)γ(M ⊗ N)) = (V ⊗ Mt)ℛ(γ)(Wt ⊗ N)

  • (4)

    ℛ(γF )F = γΓ

  • (5)

    ℛ(γΓ) = ℛ(γ)F

  • (6)

    ℛ(γF ) = ℛ(γ)Γ

  • (7)

    ℛ(γΓ)Γ = γF

  • (8)

    ℛ(γ * δ) = ℛ(γ)ℛ(δ) (i.e., ℛ is a homomorphism).

  • (9)

    ℛ(Fγ̄F) = ℛ(γ)*

Proof

Items (1–6) were proved in items (2–7) of [5, Lemma 23]. For the other three items, we just need to prove them when γ = abt ⊗ cdt and δ = eft ⊗ ght, where a, b, c, d, e, f, g, h ∈ ℂk.

Item (7): ℛ((abt ⊗ cdt)Γ)Γ = ℛ(abt ⊗ dct)Γ = (adt ⊗ bct)Γ = adt ⊗ cbt.

  Now, (abt ⊗ cdt)F = adt ⊗ cbt. So ℛ(γΓ)Γ = γF.

Item (8): ℛ((abt ⊗ cdt) * (eft ⊗ ght)) = ℛ(abt ⊗ ght)(dtf)(cte) = (agt ⊗ bht)(dtf)(cte).

  Now, ℛ(abt ⊗ cdt)ℛ(eft ⊗ ght) = (act ⊗ bdt)(egt ⊗ fht) = (agt ⊗ bht)(cte)(dtf).

  So ℛ(γ * δ) = ℛ(γ)ℛ(δ).

Item (9): ℛ(F(āb̄t ⊗ c̄d̄t)F) = ℛ(c̄d̄t ⊗ āb̄t) = c̄āt ⊗ d̄b̄t.

  Now, ℛ(abt ⊗ cdt)* = (act ⊗ bdt)* = c̄āt ⊗ d̄b̄t.

  So ℛ(Fγ̄F) = ℛ(γ)*.
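The identities of Lemma 1 lend themselves to direct numerical verification. Below is our own check (not from the paper) of items (2)–(7) and (9) on a random matrix, with ℛ and Γ written as index permutations of a k² × k² array:

```python
import numpy as np

def realign(g, k):      # R = L_(23)
    return g.reshape(k, k, k, k).transpose(0, 2, 1, 3).reshape(k*k, k*k)

def ptranspose(g, k):   # Γ = L_(34)
    return g.reshape(k, k, k, k).transpose(0, 3, 2, 1).reshape(k*k, k*k)

k = 3
rng = np.random.default_rng(3)
g = rng.standard_normal((k*k, k*k)) + 1j * rng.standard_normal((k*k, k*k))
u = np.eye(k).reshape(-1)
F = ptranspose(np.outer(u, u), k)                 # flip operator F = (uu^t)^Γ

R, Pt = realign, ptranspose
assert np.allclose(R(R(g, k), k), g)                              # item (2)
V, W, M, N = [rng.standard_normal((k, k)) for _ in range(4)]
assert np.allclose(R(np.kron(V, W) @ g @ np.kron(M, N), k),
                   np.kron(V, M.T) @ R(g, k) @ np.kron(W.T, N))   # item (3)
assert np.allclose(R(g @ F, k) @ F, Pt(g, k))                     # item (4)
assert np.allclose(R(Pt(g, k), k), R(g, k) @ F)                   # item (5)
assert np.allclose(R(g @ F, k), Pt(R(g, k), k))                   # item (6)
assert np.allclose(Pt(R(Pt(g, k), k), k), g @ F)                  # item (7)
assert np.allclose(R(F @ g.conj() @ F, k), R(g, k).conj().T)      # item (9)
```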

The next lemma is important for Theorem 3; it says something very interesting about PPT states that remain PPT under realignment: they must be invariant under realignment.

Lemma 2

Let γ ∈ ℳk ⊗ ℳk be a positive semidefinite Hermitian matrix. If γ and ℛ(γ) are PPT, then γ = ℛ(γ).

Proof

By item (4) of Lemma 1, γΓ = ℛ(γF )F.

Now, since F² = Id, γΓF = ℛ(γF).

Next, by item (6) of Lemma 1, ℛ(γF ) = ℛ(γ)Γ. So γΓF = ℛ(γ)Γ is a positive semidefinite Hermitian matrix by hypothesis.

Since F, γΓ and γΓF are Hermitian matrices, γΓF = FγΓ. So there is an orthonormal basis of ℂk ⊗ ℂk formed by symmetric and anti-symmetric eigenvectors of γΓ.

Recall that γΓ and γΓF are positive semidefinite. If v is an anti-symmetric eigenvector of γΓ with eigenvalue μ, then γΓFv = −μv, so μ ≥ 0 and −μ ≥ 0, forcing μ = 0. Hence γΓ = γΓF.

Finally, we have noticed that γΓF = ℛ(γ)Γ. So γΓ = ℛ(γ)Γ, which implies γ = ℛ(γ).

The next lemma is used in this work a few times. Although simple, we state it here in order to better organize our arguments.

Lemma 3

Let γ ∈ ℳk ⊗ ℳk. The singular values of the linear map Gγ : ℳk → ℳk and of the matrix ℛ(γ) ∈ ℳk ⊗ ℳk are equal, and their largest singular value can be computed as ‖Gγ‖∞ = ‖ℛ(γ)‖∞ = max{|tr(γ(X ⊗ Y))|, ‖X‖2 = ‖Y‖2 = 1}.

In addition, if γ is Hermitian, then there are Hermitian matrices γ1, δ1 ∈ ℳk such that ‖ℛ(γ)‖∞ = tr(γ(γ1 ⊗ δ1)), where ‖γ1‖2 = ‖δ1‖2 = 1.

Proof

It was proved in item (1) of [5, Lemma 23] that V ∘ Fγ* ∘ V* = ℛ((γ*)Γ), where V : ℳk → ℂk ⊗ ℂk is the vectorization map V(Σi aibit) = Σi ai ⊗ bi. Since the vectorization map is an isometry, the singular values of Fγ* : ℳk → ℳk and ℛ((γ*)Γ) ∈ ℳk ⊗ ℳk are equal.

Now, by item 5) of Lemma 1, ℛ((γ*)Γ) = ℛ(γ*)F, where F is the flip operator, which is unitary. Hence the singular values of ℛ((γ*)Γ) and ℛ(γ*) are the same.

Next, items (2) and (9) of Lemma 1 imply that ℛ(γ*) = Fℛ(γ)‾F, where ℛ(γ)‾ denotes the entrywise conjugate of ℛ(γ). Again, since F is unitary, ℛ(γ*) and ℛ(γ) share their singular values. Hence the singular values of Fγ* : ℳk → ℳk and ℛ(γ) ∈ ℳk ⊗ ℳk are equal.

In addition, Gγ : ℳk → ℳk is the adjoint of Fγ* : ℳk → ℳk, thus Gγ and Fγ* have the same singular values. Therefore, the singular values of the linear map Gγ : ℳk → ℳk and the matrix ℛ(γ) ∈ ℳk ⊗ ℳk are equal. Thus, ‖Gγ‖∞ = ‖ℛ(γ)‖∞.

In order to show that their largest singular values can be computed as stated above, recall that the largest singular value of Gγ : ℳk → ℳk is equal to max{‖Gγ(W)‖2, ‖W‖2 = 1} = max{|tr(Gγ(W)V)|, W, V ∈ ℳk and ‖W‖2 = ‖V‖2 = 1} = max{|tr(γ(W ⊗ V))|, W, V ∈ ℳk and ‖W‖2 = ‖V‖2 = 1}.

This proves the first part of the lemma. Now, for the second part, if γ is Hermitian, then the set of Hermitian matrices is left invariant by Gγ : ℳk → ℳk. Therefore, there is a Hermitian matrix γ1 ∈ ℳk such that ‖γ1‖2 = 1 and ‖Gγ(γ1)‖2 equals the largest singular value of Gγ.

Notice that ‖Gγ(γ1)‖2 = tr(Gγ(γ1)δ1) = tr(γ(γ1 ⊗ δ1)), where δ1 = Gγ(γ1)/‖Gγ(γ1)‖2.

So ‖ℛ(γ)‖∞ = tr(γ(γ1 ⊗ δ1)), where ‖γ1‖2 = ‖δ1‖2 = 1 and γ1, δ1 are Hermitian matrices.
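Lemma 3’s equality of singular values can be confirmed numerically (our own check): build the matrix of Gγ in the basis {Eij} and compare its singular values with those of ℛ(γ).

```python
import numpy as np

def realign(g, k):
    return g.reshape(k, k, k, k).transpose(0, 2, 1, 3).reshape(k*k, k*k)

k = 3
rng = np.random.default_rng(4)
g = rng.standard_normal((k*k, k*k)) + 1j * rng.standard_normal((k*k, k*k))

def G(X):   # G_γ(X) = tr_1(γ (X ⊗ Id))
    return np.einsum('ipiq->pq', (g @ np.kron(X, np.eye(k))).reshape(k, k, k, k))

cols = []
for i in range(k):
    for j in range(k):
        E = np.zeros((k, k)); E[i, j] = 1
        cols.append(G(E).reshape(-1))        # column (i,j) = vec(G_γ(E_ij))
M = np.column_stack(cols)

s_map = np.sort(np.linalg.svd(M, compute_uv=False))
s_realign = np.sort(np.linalg.svd(realign(g, k), compute_uv=False))
assert np.allclose(s_map, s_realign)         # same singular values, same ∞-norm
```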

Now, we have all the preliminary results required to discuss our new results.

3. An Upper Bound for the Spectral Radius of the Special Triad

In this section we obtain an upper bound for the spectral radius of PPT states, SPC states and invariant under realignment states (Theorem 1). In order to prove this theorem, two lemmas are required.

Our first lemma says that the largest singular value of the partial transpose of any state cannot exceed the largest singular values of its reduced states or of its own realignment.

Lemma 4

Let γ ∈ ℳk ⊗ ℳk be any positive semidefinite Hermitian matrix. Then ‖γΓ‖∞ ≤ min{‖γA‖∞, ‖γB‖∞, ‖ℛ(γ)‖∞}.

Proof

Let v ∈ ℂk ⊗ ℂk be a unit vector such that |tr(γΓvv*)| = ‖γΓ‖∞ and let n be the Schmidt rank of v. Denote by {g1, …, gk} and {e1, …, en} the canonical bases of ℂk and ℂn, respectively.

Next, there are matrices D ∈ ℳk×k, E ∈ ℳk×k, R ∈ ℳk×n and S ∈ ℳk×n such that

  • (1)

    v = (D ⊗ Id)w, where tr(DD*) = 1 and w = Σi gi ⊗ gi ∈ ℂk ⊗ ℂk,

  • (2)

    v = (Id ⊗ E)w, where tr(ĒEt) = 1 and w = Σi gi ⊗ gi ∈ ℂk ⊗ ℂk,

  • (3)

    v = (R ⊗ S)u, where u = Σi ei ⊗ ei ∈ ℂn ⊗ ℂn and tr((RR*)²) = tr((S̄St)²) = 1.

Now,

  • (1)

    ‖γΓ‖∞ = |tr(γΓvv*)| = |tr((D* ⊗ Id)γ(D ⊗ Id)(ww*)Γ)| ≤ tr((D* ⊗ Id)γ(D ⊗ Id)), since Id ⊗ Id ± (ww*)Γ and γ are positive semidefinite. Hence ‖γΓ‖∞ ≤ tr(γ(DD* ⊗ Id)) = tr(γADD*) ≤ ‖γA‖∞ tr(DD*) = ‖γA‖∞.

  • (2)

    ‖γΓ‖∞ = |tr(γΓvv*)| = |tr((Id ⊗ Et)γ(Id ⊗ Ē)(ww*)Γ)| ≤ tr((Id ⊗ Et)γ(Id ⊗ Ē)), since Id ⊗ Id ± (ww*)Γ and γ are positive semidefinite. Hence ‖γΓ‖∞ ≤ tr(γ(Id ⊗ ĒEt)) = tr(γBĒEt) ≤ ‖γB‖∞ tr(ĒEt) = ‖γB‖∞.

  • (3)

    ‖γΓ‖∞ = |tr(γΓvv*)| = |tr((R* ⊗ St)γ(R ⊗ S̄)(uu*)Γ)| ≤ tr((R* ⊗ St)γ(R ⊗ S̄)), since Id ⊗ Id ± (uu*)Γ and γ are positive semidefinite. Hence ‖γΓ‖∞ ≤ tr(γ(RR* ⊗ S̄St)) ≤ ‖ℛ(γ)‖∞, since ‖RR*‖2 = ‖S̄St‖2 = 1, by Lemma 3.

Our next lemma shows that the largest singular value of the realignment of any state cannot exceed the geometric mean of the largest singular values of its reduced states.

Lemma 5

Let γ ∈ ℳk ⊗ ℳk be a positive semidefinite Hermitian matrix. Then ‖ℛ(γ)‖∞² ≤ ‖γA‖∞ ‖γB‖∞.

Proof

By Lemma 3, there are Hermitian matrices γ1 ∈ ℳk and δ1 ∈ ℳk such that

  • (1)

    tr(γ1²) = tr(δ1²) = 1

  • (2)

    tr(γ(γ1 ⊗ δ1)) = ‖ℛ(γ)‖∞.

Consider the following positive semidefinite Hermitian matrix:

$$\begin{pmatrix} \gamma^{1/2} & 0 \\ 0 & \gamma^{1/2} \end{pmatrix}\begin{pmatrix} Id \otimes \delta_1 \\ \gamma_1 \otimes Id \end{pmatrix}\begin{pmatrix} Id \otimes \delta_1 & \gamma_1 \otimes Id \end{pmatrix}\begin{pmatrix} \gamma^{1/2} & 0 \\ 0 & \gamma^{1/2} \end{pmatrix} = \begin{pmatrix} \gamma^{1/2}(Id \otimes \delta_1^2)\gamma^{1/2} & \gamma^{1/2}(\gamma_1 \otimes \delta_1)\gamma^{1/2} \\ \gamma^{1/2}(\gamma_1 \otimes \delta_1)\gamma^{1/2} & \gamma^{1/2}(\gamma_1^2 \otimes Id)\gamma^{1/2} \end{pmatrix}.$$

Its partial trace, $$D = \begin{pmatrix} tr(\gamma(Id_k \otimes \delta_1^2)) & tr(\gamma(\gamma_1 \otimes \delta_1)) \\ tr(\gamma(\gamma_1 \otimes \delta_1)) & tr(\gamma(\gamma_1^2 \otimes Id_k)) \end{pmatrix}_{2 \times 2},$$ is also positive semidefinite.

Thus 0 ≤ det(D) = tr(γ(Idk ⊗ δ1²)) tr(γ(γ1² ⊗ Idk)) − tr(γ(γ1 ⊗ δ1))².

Notice that

  • tr(γ(Idk ⊗ δ1²)) = tr(γBδ1²) ≤ ‖γB‖∞ tr(δ1²) = ‖γB‖∞,

  • tr(γ(γ1² ⊗ Idk)) = tr(γAγ1²) ≤ ‖γA‖∞ tr(γ1²) = ‖γA‖∞,

  • tr(γ(γ1 ⊗ δ1))² = ‖ℛ(γ)‖∞².

Hence ‖ℛ(γ)‖∞² ≤ ‖γA‖∞ ‖γB‖∞.
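A random numerical test of Lemma 5 (our own illustration; any positive semidefinite γ works):

```python
import numpy as np

def realign(g, k):
    return g.reshape(k, k, k, k).transpose(0, 2, 1, 3).reshape(k*k, k*k)

k = 3
rng = np.random.default_rng(5)
W = rng.standard_normal((k*k, k*k)) + 1j * rng.standard_normal((k*k, k*k))
g = W @ W.conj().T                      # random positive semidefinite matrix
g /= np.trace(g).real

gA = np.einsum('ipjp->ij', g.reshape(k, k, k, k))   # reduced state tr_2(γ)
gB = np.einsum('ipiq->pq', g.reshape(k, k, k, k))   # reduced state tr_1(γ)

op = lambda X: np.linalg.svd(X, compute_uv=False).max()   # operator norm

# ||R(γ)||_∞² ≤ ||γ_A||_∞ ||γ_B||_∞
assert op(realign(g, k))**2 <= op(gA) * op(gB) + 1e-12
```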

These lemmas imply the first new connection for our special triad of quantum states. This theorem says that the spectral radius of any state that is PPT or SPC or invariant under realignment cannot exceed the largest singular values of its reduced states or of its own realignment.

Theorem 1

Let γ ∈ ℳk ⊗ ℳk be a positive semidefinite Hermitian matrix. If γ is PPT or SPC or invariant under realignment, then ‖γ‖∞ ≤ min{‖γA‖∞, ‖γB‖∞, ‖ℛ(γ)‖∞}.

Proof

First, let γ be a PPT state. Hence γΓ is also a state.

Notice that (γΓ)A = γA, (γΓ)B = γBt and, by Lemma 3, ‖ℛ(γΓ)‖∞ = ‖ℛ(γ)‖∞.

By applying Lemma 4 to γΓ, we obtain ‖γ‖∞ ≤ min{‖(γΓ)A‖∞, ‖(γΓ)B‖∞, ‖ℛ(γΓ)‖∞} = min{‖γA‖∞, ‖γB‖∞, ‖ℛ(γ)‖∞}.

So the proof of the PPT case is complete.

Next, if γ is SPC or invariant under realignment, then γA = γB or γA = γBt by [5, Corollary 25]. Hence, by Lemma 5, ‖ℛ(γ)‖∞ ≤ ‖γA‖∞.

It remains to prove that ‖γ ≤ ‖ℛ(γ)‖, whenever γ is SPC or invariant under realignment. Notice that this inequality is trivial for matrices invariant under realignment. Thus, let γ be an SPC state.

As defined in the introduction, ℛ(γΓ) is positive semidefinite. Applying Lemma 4 to ℛ(γΓ), we obtain ‖ℛ(γΓ)Γ‖∞ ≤ ‖ℛ(ℛ(γΓ))‖∞.

Now, by items (7) and (2) of Lemma 1, ℛ(γΓ)Γ = γF and ℛ(ℛ(γΓ)) = γΓ, where F is the flip operator. Therefore ‖γF‖∞ ≤ ‖γΓ‖∞.

Finally, ‖γΓ‖∞ ≤ ‖ℛ(γ)‖∞ by Lemma 4, and ‖γ‖∞ = ‖γF‖∞, since F is unitary.
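As a sanity check of Theorem 1 (our own example, using a random separable, hence PPT, state):

```python
import numpy as np

def realign(g, k):
    return g.reshape(k, k, k, k).transpose(0, 2, 1, 3).reshape(k*k, k*k)

k = 3
rng = np.random.default_rng(6)
# a random separable state γ = Σ_i p_i x_i x_i* ⊗ y_i y_i*
g = np.zeros((k*k, k*k), dtype=complex)
for pi in rng.dirichlet(np.ones(5)):
    x = rng.standard_normal(k) + 1j * rng.standard_normal(k); x /= np.linalg.norm(x)
    y = rng.standard_normal(k) + 1j * rng.standard_normal(k); y /= np.linalg.norm(y)
    g += pi * np.kron(np.outer(x, x.conj()), np.outer(y, y.conj()))

gA = np.einsum('ipjp->ij', g.reshape(k, k, k, k))   # reduced state tr_2(γ)
gB = np.einsum('ipiq->pq', g.reshape(k, k, k, k))   # reduced state tr_1(γ)
op = lambda X: np.linalg.svd(X, compute_uv=False).max()

# spectral radius bound of Theorem 1 (γ is PSD, so ||γ||_∞ is its top eigenvalue)
assert op(g) <= min(op(gA), op(gB), op(realign(g, k))) + 1e-12
```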

4. Filter Normal Form for SPC States and Invariant under Realignment States

In this section we show that every SPC state and every invariant under realignment state can be put in the filter normal form. In addition their filter normal forms can still be chosen to be SPC and invariant under realignment, respectively (Theorem 2).

Then we show that every PPT state whose rank is equal to its two reduced ranks can be put in the filter normal form (see Theorem 3).

As described in the introduction, there are applications of this normal form in entanglement theory. Moreover, it has been noticed that this normal form is connected to an extension of the Sinkhorn-Knopp theorem for positive maps [4,21]. This theorem concerns the existence of invertible matrices R, S such that R*T(SXS*)R is doubly stochastic for a positive map T satisfying suitable conditions. So we start this section with some definitions and lemmas related to this theorem.

Let V ∈ ℳk be an orthogonal projection and consider the sub-algebra VℳkV = {VXV, X ∈ ℳk} of ℳk. Let Pk denote the set of positive semidefinite Hermitian matrices of ℳk.

Definition 2

Let us say that T : VℳkV → VℳkV is a positive map if T(X) ∈ Pk ∩ VℳkV for every X ∈ Pk ∩ VℳkV. In addition, we say that a positive map T : VℳkV → VℳkV is doubly stochastic if the following equivalent conditions hold:

  • (1)

    the matrix Am×m, defined as Aij = tr(T(vivi*)wjwj*), is doubly stochastic for every choice of orthonormal bases v1, …, vm and w1, …, wm of Im(V),

  • (2)

    T(V) = T*(V) = V, where T* : VℳkV → VℳkV is the adjoint of T : VℳkV → VℳkV with respect to the trace inner product.

Definition 3

A positive map T : VℳkV → VℳkV is said to be fully indecomposable if the following equivalent conditions hold:

  • (1)

    the matrix Am×m, defined as Aij=tr(T(vivi*)wjwj*) {A_{ij}} = tr(T({v_i}v_i^*){w_j}w_j^*) , is fully indecomposable [22] for every choice of orthonormal bases v1, …, vm and w1, …, wm of Im(V ),

  • (2)

rank(X) + rank(Y) < rank(V), whenever X, Y ∈ (VℳkV ∩ Pk) ∖ {0} and tr(T(X)Y) = 0,

  • (3)

    rank(T (X)) > rank(X), ∀XVkVPk such that 0 < rank(X) < rank(V).
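For the matrix notion referenced in condition (1), a nonnegative m × m matrix is fully indecomposable when it contains no p × q all-zero submatrix with p + q = m [22]. A brute-force check of this definition (exponential in m, for illustration only; the function name is ours):

```python
import numpy as np
from itertools import combinations

def fully_indecomposable(A):
    """True iff the nonnegative m x m matrix A has no p x q all-zero
    submatrix with p + q = m (rows and columns chosen independently)."""
    m = A.shape[0]
    for p in range(1, m):
        for rows in combinations(range(m), p):
            for cols in combinations(range(m), m - p):
                if np.all(A[np.ix_(rows, cols)] == 0):
                    return False
    return True

# the all-ones matrix is fully indecomposable; the identity is not,
# since e.g. Id[{0}, {1, 2}] is a 1 x 2 zero submatrix with 1 + 2 = 3
```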

Below we prove two lemmas concerning self-adjoint maps with respect to the trace inner product.

Lemma 6

Let T : VkVVkV be a fully indecomposable self-adjoint map. There is RVkV such that R*T (R(⋅)R*)R : VkVVkV is doubly stochastic.

Proof

Since T : VℳkV → VℳkV is fully indecomposable, it has total support [21, Lemma 2.3] or it has a positive achievable capacity [4]. So there are matrices A, B ∈ VℳkV such that rank(A) = rank(B) = rank(V ) and T1(X) = B*T (AXA*)B is doubly stochastic [21, Theorem 3.7]. Notice that T1 is still fully indecomposable.

Now, T2(X)=T1*(X)=A*T(BXB*)A {T_2}(X) = T_1^*(X) = {A^*}T(BX{B^*})A is also doubly stochastic and (1) T2(X)=C*T1(DXD*)C, {T_2}(X) = {C^*}{T_1}(DX{D^*})C, where C = B+A, D = A+B and Y+ is the pseudo-inverse of Y.

Let C = EFG* and D = HLJ* be singular value decompositions of C and D, where

  • (1)

    E = (e1, …, em), G = (g1, …, gm), H = (h1, …, hm), J = (j1, …, jm) ∈ ℳk×m and the columns of each of these matrices form an orthonormal basis of Im(V).

  • (2)

    F = diagonal(f1, …, fm), L = diagonal(l1, …, lm) and fi > 0, li > 0 for every i.

Next, define R, S ∈ ℳm×m as Rik=tr(T2(jiji*)gkgk*)andSik=tr(T1(hihi*)ekek*). {R_{ik}} = tr({T_2}({j_i}j_i^*){g_k}g_k^*)\;{\rm{and}}\;{S_{ik}} = tr({T_1}({h_i}h_i^*){e_k}e_k^*).

By Equation (1), Rik=li2fk2Sik {R_{ik}} = l_i^2f_k^2\;{S_{ik}} , i.e., R = L2 SF2.

Thus, L2, F2 are positive diagonal matrices such that L2SF2 is doubly stochastic by Definition 2. Recall that S is a fully indecomposable matrix by Definition 3.

Since S is fully indecomposable, by a theorem proved in [23], the positive diagonal matrices L2 and F2 for which L2SF2 is doubly stochastic are unique up to reciprocal positive scalar multiples; but Id·S·Id is also doubly stochastic.

Thus, L = a−2Id and F = a2Id for some a > 0.

Therefore, B+A = C = a2U, where U = EG*. Notice that U V U* = V.

In addition, BB+A = a2BU. Since BB+ = V and V A = A, we obtain A = a2BU.

Thus, B*T(A()A*)B=B*T((a2B)U()U*(a2B)*)B=(aB)*T((aB)U()U*(aB)*)(aB). {B^*}T(A( \cdot ){A^*})B = {B^*}T(({a^2}B)U( \cdot ){U^*}{({a^2}B)^*})B = {(aB)^*}T((aB)U( \cdot ){U^*}{(aB)^*})(aB).

Finally, (aB)*T ((aB)(⋅)(aB)*)(aB) is doubly stochastic too, since V = U V U*.

Lemma 7

Let T : VkVVkV be a self-adjoint positive map such that v ∉ ker(T (vv*)) for every vIm(V)\{0} v \in {\rm{Im}}(V)\backslash \{ \vec 0\} . Then there is RVkV such that R*T (R(⋅)R*)R : VkVVkV is doubly stochastic.

Proof

This proof is an induction on the rank(V).

If rank(V) = 1, then VkV = {λvv*, λ ∈ ℂ}. Thus, T (vv*) = μvv*, where μ > 0 by hypothesis.

Define R=1μ4vv* R = {1 \over {\root 4 \of \mu }}v{v^*} . So R*T (Rvv*R*)R = vv*. Thus, R*T (R(⋅)R*)R : VkVVkV is a self-adjoint doubly stochastic map.

Let rank(V ) = n > 1 and assume the validity of this theorem whenever the rank of the orthogonal projection is less than n.

Consider all pairs of orthogonal projections (V1, W1) such that V1,W1VMkV\{0}and0=tr(T(V1)W1). {V_1},{W_1} \in V{{\cal M}_k}V\backslash \{ 0\} \;{\rm{and}}\;0 = tr(T({V_1}){W_1}).

Since there is no vIm(V)\{0} v \in {\rm{Im}}(V)\backslash \{ \vec 0\} such that tr(T(vv*)vv*) = 0, Im(V1)Im(W1)={0} {\rm{Im}}({V_1}) \cap {\rm{Im}}({W_1}) = \{ \vec 0\} . So rank(V1)+rank(W1)rank(V). {\rm{rank}}({V_1}) + {\rm{rank}}({W_1}) \le {\rm{rank}}(V).

If for every aforementioned pair (V1, W1), we have rank(V1) + rank(W1) < rank(V), then T is fully indecomposable by Definition 3. So the result follows by Lemma 6.

Let us assume that there is such a pair (V1, W1) satisfying rank(V1) + rank(W1) = rank(V).

Since Im(V1)Im(W1)={0} {\rm{Im}}({V_1}) \cap {\rm{Im}}({W_1}) = \{ \vec 0\} , there is S ∈ VℳkV such that SV1S* = V1 and S(V − V1)S* = W1. Define T′(X) = S*T(SXS*)S. Notice that tr(T′(V1)(V − V1)) = 0.

Next, since T is self-adjoint, so is T′; hence tr(T′(V − V1)V1) = 0.

These last two equalities imply that (2) T(V1MkV1)V1MkV1andT((VV1)Mk(VV1))(VV1)Mk(VV1). T'({V_1}{{\cal M}_k}{V_1}) \subset {V_1}{{\cal M}_k}{V_1}\;\;{\rm{and}}\;\;T'((V - {V_1}){{\cal M}_k}(V - {V_1})) \subset (V - {V_1}){{\cal M}_k}(V - {V_1}).

Of course the restrictions T′|V1ℳkV1 and T′|(V−V1)ℳk(V−V1) are self-adjoint and there is no vIm(V1)\{0} v \in {\mathop{\rm Im}\nolimits} ({V_1})\backslash \{ \vec 0\} or vIm(VV1)\{0} v \in {\mathop{\rm Im}\nolimits} (V - {V_1})\backslash \{ \vec 0\} such that tr(T′(vv*)vv*) = 0.

By induction hypothesis, there are R1V1kV1 and R2 ∈ (VV1)ℳk(VV1) such that

  • R1*T(R1()R1*)R1:V1MkV1V1MkV1 R_1^*T'({R_1}( \cdot )R_1^*){R_1}:{V_1}{{\cal M}_k}{V_1} \to {V_1}{{\cal M}_k}{V_1} is doubly stochastic, i.e., (3) R1*T(R1(V1)R1*)R1=V1 R_1^*T'({R_1}({V_1})R_1^*){R_1} = {V_1}

  • R2*T(R2()R2*)R2:(VV1)Mk(VV1)(VV1)Mk(VV1) R_2^*T'({R_2}( \cdot )R_2^*){R_2}:(V - {V_1}){{\cal M}_k}(V - {V_1}) \to (V - {V_1}){{\cal M}_k}(V - {V_1}) is doubly stochastic, i.e., (4) R2*T(R2(VV1)R2*)R2=VV1 R_2^*T'({R_2}(V - {V_1})R_2^*){R_2} = V - {V_1}

Set R = R1 + R2 ∈ VℳkV. Notice that T″(X) = R*T′(RXR*)R is self-adjoint and \matrix{ {T''(V)} \hfill & { = T''({V_1} + V - {V_1}) = T''({V_1}) + T''(V - {V_1})} \hfill \cr {} \hfill & { = {R^*}T'(R{V_1}{R^*})R + {R^*}T'(R(V - {V_1}){R^*})R} \hfill \cr {} \hfill & { = {R^*}T'({R_1}{V_1}R_1^*)R + {R^*}T'({R_2}(V - {V_1})R_2^*)R} \hfill \cr {} \hfill & { = R_1^*T'({R_1}{V_1}R_1^*){R_1} + R_2^*T'({R_2}(V - {V_1})R_2^*){R_2},\;{\rm{by}}\;{\rm{Equation}}\;(2),} \hfill \cr {} \hfill & { = {V_1} + (V - {V_1}) = V,\;{\rm{by}}\;{\rm{Equations}}\;(3)\;{\rm{and}}\;(4).} \hfill \cr }

Hence, T″ = (SR)*T((SR)(⋅)(SR)*)(SR) : VℳkV → VℳkV is doubly stochastic, which completes the induction.
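The rank-one base case of this induction is easy to check numerically; a sketch with a hypothetical unit vector v and scale μ (our choice of test data):

```python
import numpy as np

k, mu = 3, 2.5
rng = np.random.default_rng(1)
v = rng.normal(size=k) + 1j * rng.normal(size=k)
v /= np.linalg.norm(v)
P = np.outer(v, v.conj())               # P = v v*, a rank-1 projection

# on V M_k V = span{v v*}, a positive map acts as T(v v*) = mu v v*
T = lambda X: mu * np.trace(P @ X) * P
R = P / mu ** 0.25                      # R = v v* / mu^{1/4}, as in the proof

# R* T(R (v v*) R*) R recovers v v*, so the rescaled map is doubly stochastic
out = R.conj().T @ T(R @ P @ R.conj().T) @ R
```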

Corollary 1

Let A ∈ ℳk⊗ℳk be an Hermitian matrix such that GA : ℳk → ℳk is a self-adjoint positive map and tr(A(vv*vv*)) > 0 for every v ∈ ℂk ∖{0}. There is an invertible matrix R ∈ ℳk such that (R*R*)A(RR)=i=1nλiγiγi ({R^*} \otimes {R^*})A(R \otimes R) = \sum\nolimits_{i = 1}^n {\lambda _i}{\gamma _i} \otimes {\gamma _i} , where

  • (1)

    λ1 = 1 and γ1=Idk {\gamma _1} = {{Id} \over {\sqrt k }}

  • (2)

    λi ∈ ℝ and γi=γi* {\gamma _i} = \gamma _i^* for every i,

  • (3)

    1 ≥ |λi| for every i,

  • (4)

    tr(γiγj) = 0 for every ij and tr(γi2)=1 tr(\gamma _i^2) = 1 for every i.

Proof

By the definition of GA : ℳk → ℳk (given in the introduction), notice that 0<tr(A(vv*vv*))=tr(GA(vv*)vv*). 0 < tr(A(v{v^*} \otimes v{v^*})) = tr({G_A}(v{v^*})v{v^*}).

Hence v ∉ ker GA(vv*) for every v ∈ ℂk ∖ {0}.

By Lemma 7, there is an invertible matrix R ∈ ℳk such that R*GA(RXR*)R is doubly stochastic.

Define B = (R*R*)A(RR) and notice that GB(X) = R*GA(RXR*)R. Therefore, GB is a self-adjoint doubly stochastic map, i.e., GBIdk=Idk {G_B}\left( {{{Id} \over {\sqrt k }}} \right) = {{Id} \over {\sqrt k }} .

Let γ1=Idk,γ2,,γk2 {\gamma _1} = {{Id} \over {\sqrt k }},{\gamma _2}, \ldots ,{\gamma _{{k^2}}} be an orthonormal basis of ℳk formed by Hermitian eigenvectors of the self-adjoint positive map GB : ℳk → ℳk such that

  • GB(γi) = λiγi, where |λi| > 0 for 1 ≤ in

  • GB(γi) = 0, for i > n.

Since GB is a positive map satisfying GB(Id) = Id, its spectral radius is 1 [24, Theorem 2.3.7]. So |λi| ≤ 1 for every i.

Finally, by the definition of GB, B=IdkGBIdk+γ2GB(γ2)++γk2GB(γk2)=i=1nλiγiγi. B = {{Id} \over {\sqrt k }} \otimes {G_B}\left( {{{Id} \over {\sqrt k }}} \right) + {\gamma _2} \otimes {G_B}({\gamma _2}) + \ldots + {\gamma _{{k^2}}} \otimes {G_B}({\gamma _{{k^2}}}) = \sum\limits_{i = 1}^n {\lambda _i}{\gamma _i} \otimes {\gamma _i}.
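The spectral radius fact used in the proof above can be illustrated numerically: represent a unital positive map as a k² × k² matrix acting on vec(X) and inspect its eigenvalues. The construction below (a convex combination of unitary conjugations, with our names and seed) is a sketch; with row-major vec, vec(AXB) = (A ⊗ Bᵗ)vec(X):

```python
import numpy as np

rng = np.random.default_rng(2)
k = 3

def random_unitary(k, rng):
    M = rng.normal(size=(k, k)) + 1j * rng.normal(size=(k, k))
    Q, _ = np.linalg.qr(M)
    return Q

# T(X) = sum_i p_i U_i X U_i* is positive and unital: T(Id) = Id.
# With row-major vec, vec(U X U*) = (U kron conj(U)) vec(X).
p = np.array([0.2, 0.3, 0.5])
Us = [random_unitary(k, rng) for _ in p]
T = sum(pi * np.kron(U, U.conj()) for pi, U in zip(p, Us))

TI = (T @ np.eye(k).reshape(-1)).reshape(k, k)   # T applied to the identity
radius = np.abs(np.linalg.eigvals(T)).max()      # spectral radius of T
# TI equals Id and the spectral radius equals 1
```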

The next theorem shows that SPC states and invariant under realignment states can be put in the filter normal form retaining their SPC structure and invariance under realignment, respectively.

Theorem 2

Let γ ∈ ℳk ⊗ ℳk be a positive semidefinite Hermitian matrix such that rank(γA) = k. There is an invertible matrix R ∈ ℳk such that

  • (1)

    (R*R*)γ(RR)=i=1nλiγiγi ({R^*} \otimes {R^*})\gamma (R \otimes R) = \sum\nolimits_{i = 1}^n {\lambda _i}{\gamma _i} \otimes {\gamma _i} , if ℛ(γΓ) is positive semidefinite;

  • (2)

    (R*Rt)γ(RR¯)=i=1nλiγiγi¯ ({R^*} \otimes {R^t})\gamma (R \otimes \overline R ) = \sum\nolimits_{i = 1}^n {\lambda _i}{\gamma _i} \otimes \overline {{\gamma _i}} , if ℛ(γ) is positive semidefinite,

    where

    • a)

      λ1=1k {\lambda _1} = {1 \over k} and γ1=Idk {\gamma _1} = {{Id} \over {\sqrt k }}

    • b)

      1kλi>0 {1 \over k} \ge {\lambda _i} > 0 and γi=γi* {\gamma _i} = \gamma _i^* for every i,

    • c)

      tr(γiγj) = 0 for every ij and tr(γi2)=1 tr(\gamma _i^2) = 1 for every i.

Proof

(1) If γ is a state such that ℛ(γΓ) is positive semidefinite, then, by [5, Corollary 25], γ can be written as γ=i=1naiBiBi \gamma = \sum\nolimits_{i = 1}^n {a_i}{B_i} \otimes {B_i} , where ai > 0, Bi=Bi* {B_i} = B_i^* and tr(Bi2)=1 tr(B_i^2) = 1 for every i, and tr(BiBj) = 0 for ij.

Hence Gγ(X)=i=1naiBitr(BiX) {G_\gamma }(X) = \sum\nolimits_{i = 1}^n {a_i}{B_i}tr({B_i}X) is a self-adjoint map with positive eigenvalues a1, …, an and possibly some null eigenvalues. In addition, since γ is positive semidefinite, Gγ (X) is a positive map.

Now, let v ∈ ℂk be such that 0=tr(γ(vv*vv*))=i=1naitr(Bivv*)2. 0 = tr(\gamma (v{v^*} \otimes v{v^*})) = \sum\nolimits_{i = 1}^n {a_i}tr{({B_i}v{v^*})^2}.

Since ai > 0 and tr(Bi vv*) ∈ ℝ for every i, tr(Bi vv*) = 0 for every i. Therefore, tr(γAvv*)=i=1naitr(Bi)tr(Bivv*)=0. tr({\gamma _A}v{v^*}) = \sum\nolimits_{i = 1}^n {a_i}tr({B_i})tr({B_i}v{v^*}) = 0.

By hypothesis γA is positive definite, hence v = 0.

So, by Corollary 1, there is an invertible matrix R such that (5) (R*R*)γ(RR)=i=1nλiγiγi ({R^*} \otimes {R^*})\gamma (R \otimes R) = \sum\limits_{i = 1}^n {\lambda _i}{\gamma _i} \otimes {\gamma _i} satisfies the four conditions of that corollary. It remains to show that λi > 0 and then we multiply Equation (5) by 1k {1 \over k} to obtain our desired result.

Finally, since Gγ has only non-negative eigenvalues and λ1, …, λn are non-null eigenvalues of R*Gγ (RXR*)R (as seen in the proof of Corollary 1), λ1, …, λn are positive.

(2) If γ is a state such that ℛ(γ) is positive semidefinite, then, by [5, Corollary 25], γ can be written as γ=i=1naiBiBi¯ \gamma = \sum\nolimits_{i = 1}^n {a_i}{B_i} \otimes \overline {{B_i}} , where ai > 0, Bi=Bi* {B_i} = B_i^* and tr(Bi2)=1 tr(B_i^2) = 1 for every i, and tr(BiBj) = 0 for ij.

Consider γΓ=i=1naiBiBi {\gamma ^\Gamma } = \sum\nolimits_{i = 1}^n {a_i}{B_i} \otimes {B_i} and notice that GγΓ(X) = Gγ (X)t is also a positive map. Now repeat the proof of item (1) for γΓ. Hence there is an invertible matrix R such that (6) (R*R*)γΓ(RR)=i=1nλiγiγi, ({R^*} \otimes {R^*}){\gamma ^\Gamma }(R \otimes R) = \sum\limits_{i = 1}^n {\lambda _i}{\gamma _i} \otimes {\gamma _i}, where γi and λi satisfy all the required conditions. Finally, (R*Rt)γ(RR¯)=i=1nλiγiγi¯. ({R^*} \otimes {R^t})\gamma (R \otimes \overline R ) = \sum\limits_{i = 1}^n {\lambda _i}{\gamma _i} \otimes \overline {{\gamma _i}} .

The next corollary says that, after a local operation, any state possesses a Schmidt decomposition in which one of its matrices is a multiple of the identity.

Corollary 2

Let γ ∈ ℳk ⊗ ℳk be a state such that rank(γA) = k. There is an invertible matrix R ∈ ℳk such that (R*Id)γ(RId)=i=1naiγiδi ({R^*} \otimes Id)\gamma (R \otimes Id) = \sum\nolimits_{i = 1}^n {a_i}{\gamma _i} \otimes {\delta _i} , where

  • a)

    a1ai > 0, for every 1 ≤ in, and γ1=Idk {\gamma _1} = {{Id} \over {\sqrt k }} ,

  • b)

    γi=γi* {\gamma _i} = \gamma _i^* , δi=δi* {\delta _i} = \delta _i^* for every i,

  • c)

    tr(γiγj) = tr(δiδj) = 0 for every ij and tr(γi2)=tr(δi2)=1 tr(\gamma _i^2) = tr(\delta _i^2) = 1 .

Proof

First, since γ is a state, so is Fγ¯F F\overline \gamma F . Therefore, γ*Fγ¯F \gamma *F\overline \gamma F is positive semidefinite by item a) of Remark 1.

Now, by items (8) and (9) of Lemma 1, R(γ*Fγ¯F)=R(γ)R(γ)*. {\cal R}(\gamma *F\overline \gamma F) = {\cal R}(\gamma ){\cal R}{(\gamma )^*}.

It is not difficult to check that rank((γ*Fγ¯F)A)=k {\rm{rank}}({(\gamma *F\overline \gamma F)_A}) = k ; we leave this verification for the end of the proof.

By item (2) of Theorem 2, there is an invertible matrix R ∈ ℳk such that (R*Rt)(γ*Fγ¯F)(RR¯)=i=1nλiγiγi, ({R^*} \otimes {R^t})(\gamma *F\overline \gamma F)(R \otimes \overline R ) = \sum\nolimits_{i = 1}^n {\lambda _i}{\gamma _i} \otimes {\gamma _i}, where

  • a)

    λ1=1k {\lambda _1} = {1 \over k} and γ1=Idk {\gamma _1} = {{Id} \over {\sqrt k }} ,

  • b)

    1kλi>0 {1 \over k} \ge {\lambda _i} > 0 and γi=γi* {\gamma _i} = \gamma _i^* for every i,

  • c)

    tr(γiγj) = 0 for every ij and tr(γi2)=1 tr(\gamma _i^2) = 1 for every i.

Define δ = (R*Id)γ(RId) and notice that δ*FδtF=δ*Fδ¯F=(R*Rt)(γ*Fγ¯F)(RR¯). \delta *F{\delta ^t}F = \delta *F\overline \delta F = ({R^*} \otimes {R^t})(\gamma *F\overline \gamma F)(R \otimes \overline R ).

Thus, Gδ*FδtFIdk=λ1Idk {G_{\delta *F{\delta ^t}F}}\left( {{{Id} \over k}} \right) = {\lambda _1}{{Id} \over k} .

By item c) of Remark 1, FδGδIdk=Gδ*FδtFIdkt=λ1Idk {F_\delta }\left( {{G_\delta }\left( {{{Id} \over {\sqrt k }}} \right)} \right) = {G_{\delta *F{\delta ^t}F}}{\left( {{{Id} \over {\sqrt k }}} \right)^t} = {\lambda _1}{{Id} \over {\sqrt k }} . So Fδ(Gδ(Id)) = λ1Id.

By [24, Theorem 2.3.7], λ1 is the largest eigenvalue of the positive map FδGδ. So λ1 \sqrt {{\lambda _1}} is the largest singular value of Gδ and Fδ, since they are adjoints.

Next, let δ1=Idk,δ2,,δk2 {\delta _1} = {{Id} \over {\sqrt k }},{\delta _2}, \ldots ,{\delta _{{k^2}}} be an orthonormal basis of ℳk formed by Hermitian eigenvectors of FδGδ : ℳk → ℳk.

Notice that (R*Id)γ(RId)=δ=i=1k2δiGδ(δi) ({R^*} \otimes Id)\gamma (R \otimes Id) = \delta = \sum\nolimits_{i = 1}^{{k^2}} {\delta _i} \otimes {G_\delta }({\delta _i}) .

If Gδ(δi) ≠ 0k×k, for 1 ≤ in, then define ai = ‖Gδ(δi)‖2 > 0. Thus, δ=i=1naiδi1aiGδ(δi) \delta = \sum\nolimits_{i = 1}^n {a_i}{\delta _i} \otimes {1 \over {{a_i}}}{G_\delta }({\delta _i}) .

Notice that Gδ(δi)*=Gδ*(δi*)=Gδ(δi) {G_\delta }{({\delta _i})^*} = {G_{{\delta ^*}}}(\delta _i^*) = {G_\delta }({\delta _i}) , since δ and δi are Hermitian matrices. Moreover, by the definition of ai, tr1aiGδ(δi)1aiGδ(δi)=1,1in. tr\left( {{1 \over {{a_i}}}{G_\delta }({\delta _i}){1 \over {{a_i}}}{G_\delta }({\delta _i})} \right) = 1,\;\;\;1 \le i \le n.

In addition, since Fδ(Gδ(δj)) is a multiple of δj, δi is orthogonal to Fδ(Gδ(δj)) for ij, tr1aiGδ(δi)1ajGδ(δj)=1aiajtr(δiFδ(Gδ(δj)))=0. tr\left( {{1 \over {{a_i}}}{G_\delta }({\delta _i}){1 \over {{a_j}}}{G_\delta }({\delta _j})} \right) = {1 \over {{a_i}{a_j}}}tr({\delta _i}{F_\delta }({G_\delta }({\delta _j}))) = 0.

Finally, a12=tr(Gδ(δ1)2)=tr(δ1Fδ(Gδ(δ1)))=λ1tr(δ12)=λ1 a_1^2 = tr({G_\delta }{({\delta _1})^2}) = tr({\delta _1}{F_\delta }({G_\delta }({\delta _1}))) = {\lambda _1}tr(\delta _1^2) = {\lambda _1} , so a1=λ1 {a_1} = \sqrt {{\lambda _1}} . Notice also that, for every i, ai is at most the largest singular value of Gδ, which is λ1=a1 \sqrt {{\lambda _1}} = {a_1} .

It remains to check that rank((γ*Fγ¯F)A)=k {\rm{rank}}({(\gamma *F\overline \gamma F)_A}) = k . Indeed, since Fγ and Gγ are adjoints, Im(FγGγ ) = Im(Fγ ). So there are Hermitian matrices Y, Z ∈ ℳk such that Fγ (Gγ (Y + iZ)) = Fγ (Id) = γA, which is positive definite. Since Fγ and Gγ are also positive maps, they leave the set of Hermitian matrices invariant, hence Fγ (Gγ (Y )) = γA.

There is λ > 0 such that Id − λY is positive semidefinite, so Fγ (Gγ (Id − λY)) is positive semidefinite too. Thus, Fγ (Gγ (Id)) = Fγ (Gγ (Id − λY)) + λγA, which is positive definite.

Finally, since γ and γ*Fγ¯F \gamma *F\overline \gamma F are Hermitian matrices Fγ*Fγ¯F()=Gγ*Fγ¯F*()=((FγGγ)t)*=Gγ*Fγ*(()t)=FγGγ(()t), {F_{\gamma *F\overline \gamma F}}( \cdot ) = G_{\gamma *F\overline \gamma F}^*( \cdot ) = {({({F_\gamma } \circ {G_\gamma })^t})^*} = G_\gamma ^* \circ F_\gamma ^*({( \cdot )^t}) = {F_\gamma } \circ {G_\gamma }({( \cdot )^t}), where the second equality comes from item c) of Remark 1.

Therefore (γ*Fγ¯F)A=Fγ*Fγ¯F(Id)=Fγ(Gγ(Id)) {(\gamma *F\overline \gamma F)_A} = {F_{\gamma *F\overline \gamma F}}(Id) = {F_\gamma }({G_\gamma }(Id)) , which is positive definite.

For the last result of this section, we prove that every PPT state whose rank coincides with its two reduced ranks can be put in the filter normal form. This is the key result needed to prove Theorem 6; note how the relations between linear contractions (Proposition 1) are used in its proof.

Theorem 3

If γ ∈ ℳk ⊗ ℳk is a PPT state such that rank(γ) = rank(γA) = rank(γB) = k, then there are invertible matrices R, S ∈ ℳk such that δA=δB=1kId {\delta _A} = {\delta _B} = {1 \over k}Id , where δ = (RS)γ(R*S*).

Proof

First, define γ1 = (IdS)γ(IdS*) such that (γ1)B=1kId {({\gamma _1})_B} = {1 \over k}Id , where S is invertible.

Since γ1 is also PPT, γ1(γ1)B=1k {\left\| {\gamma _1}\right\|_\infty } \le {\left\| {({\gamma _1})_B}\right\|_\infty } = {1 \over k} , by Theorem 1.

Hence 1=tr((γ1)B)=tr(γ1)γ1rank(γ1)=1k.k=1 1 = tr({({\gamma _1})_B}) = tr({\gamma _1}) \le {\left\| {\gamma _1}\right\|_\infty }\,{\rm{rank}}({\gamma _1}) = {1 \over k}.k = 1 . So γ1 has k eigenvalues equal to 1k {1 \over k} and the rest equal to 0.

Notice that (Fγ1¯F)Γ {(F\overline {{\gamma _1}} F)^\Gamma } is positive semidefinite and so is (γ1*Fγ1¯F)Γ=γ1*(Fγ1¯F)Γ {({\gamma _1}*F\overline {{\gamma _1}} F)^\Gamma } = {\gamma _1}*{(F\overline {{\gamma _1}} F)^\Gamma } as a *–product of two positive semidefinite Hermitian matrices by item a) of Remark 1. So the positive semidefinite Hermitian matrix γ1*Fγ1¯F {\gamma _1}*F\overline {{\gamma _1}} F is PPT.

Now, by items (8) and (9) of Lemma 1, R(γ1*Fγ1¯F)=R(γ1)R(γ1)* {\cal R}({\gamma _1}*F\overline {{\gamma _1}} F) = {\cal R}({\gamma _1}){\cal R}{({\gamma _1})^*} , which is positive semidefinite.

Next, on the one hand tr(R(γ1*Fγ1¯F))=tr(R(γ1)R(γ1)*)=tr(γ1γ1*)=1k tr({\cal R}({\gamma _1}*F\overline {{\gamma _1}} F)) = tr({\cal R}({\gamma _1}){\cal R}{({\gamma _1})^*}) = tr({\gamma _1}\gamma _1^*) = {1 \over k} , since ℛ is an isometry.

On the other hand R(γ1*Fγ1¯F)Γ1=R(γ1*(Fγ1¯F)F)1(byitem6ofLemma1)=R(γ1*(Fγ1¯F)F)F1(sinceFisunitary)=(γ1*(Fγ1¯F))Γ1(byitem4ofLemma1)=tr(γ1*(Fγ1¯F))(sinceγ1*(Fγ1¯F)isPPT)=tr((γ1)B(γ1)B*)=1k(byitemc)ofRemark1). \matrix{ {{{\left\| {{\cal R}{{({\gamma _1}*F\overline {{\gamma _1}} F)}^\Gamma }} \right\|}_1}} \hfill & { = \;{{\left\| {{\cal R}({\gamma _1}*(F\overline {{\gamma _1}} F)F)} \right\|}_1}\;({\rm{by}}\;{\rm{item}}\;\left( 6 \right)\;{\rm{of}}\;{\rm{Lemma}}\;1)} \hfill \cr {} \hfill & { = \;{\left\| {\cal R}({\gamma _1}*(F\overline {{\gamma _1}} F)F)F\right\|_1}\;({\rm{since}}\;F\;{\rm{is}}\;{\rm{unitary}})} \hfill \cr {} \hfill & { = \;{\left\| {{({\gamma _1}*(F\overline {{\gamma _1}} F))}^\Gamma }\right\|_1}\;({\rm{by}}\;{\rm{item}}\;\left( 4 \right)\;{\rm{of}}\;{\rm{Lemma}}\;1)} \hfill \cr {} \hfill & { = tr({\gamma _1}*(F\overline {{\gamma _1}} F))\;({\rm{since}}\;{\gamma _1}*(F\overline {{\gamma _1}} F)\;{\rm{is}}\;{\rm{PPT}})} \hfill \cr {} \hfill & { = tr({{({\gamma _1})}_B}({\gamma _1})_B^*) = {1 \over k}\;({\rm{by}}\;{\rm{item}}\;c)\;{\rm{of}}\;{\rm{Remark}}\;1).} \hfill \cr }

Therefore, tr(R(γ1*Fγ1¯F))=R(γ1*Fγ1¯F)Γ1 tr({\cal R}({\gamma _1}*F\overline {{\gamma _1}} F)) = {\left\| {\cal R}{({\gamma _1}*F\overline {{\gamma _1}} F)^\Gamma }\right\|_1} .

Since R(γ1*Fγ1¯F) {\cal R}({\gamma _1}*F\overline {{\gamma _1}} F) is positive semidefinite, this last equality means that R(γ1*Fγ1¯F)Γ {\cal R}{({\gamma _1}*F\overline {{\gamma _1}} F)^\Gamma } is positive semidefinite, i.e., R(γ1*Fγ1¯F) {\cal R}({\gamma _1}*F\overline {{\gamma _1}} F) is PPT.

We have just shown that R(γ1*Fγ1¯F) {\cal R}({\gamma _1}*F\overline {{\gamma _1}} F) and γ1*Fγ1¯F {\gamma _1}*F\overline {{\gamma _1}} F are both PPT; in this situation, Lemma 2 says that γ1*Fγ1¯F=R(γ1*Fγ1¯F) {\gamma _1}*F\overline {{\gamma _1}} F = {\cal R}({\gamma _1}*F\overline {{\gamma _1}} F) .

Now, by Corollary 2, there is an invertible matrix R ∈ ℳk such that γ2=(R*Id)γ1(RId)=i=1nλiAiBi, {\gamma _2} = ({R^*} \otimes Id){\gamma _1}(R \otimes Id) = \sum\limits_{i = 1}^n {\lambda _i}{A_i} \otimes {B_i}, where

  • a)

    λ1λi > 0 for every i and A1=Idk {A_1} = {{Id} \over {\sqrt k }} ,

  • b)

    Ai=Ai* {A_i} = A_i^* , Bi=Bi* {B_i} = B_i^* for every i,

  • c)

    tr(AiAj) = tr(BiBj) = 0 for every ij and tr(Ai2)=tr(Bi2)=1 tr(A_i^2) = tr(B_i^2) = 1 .

In addition, we can normalize its trace, so assume that tr(γ2) = 1.

Hence, γ2*(Fγ2¯F)=i=1nλi2AiAi¯=(R*Rt)(γ1*(Fγ1¯F))(RR¯) {\gamma _2}*(F\overline {{\gamma _2}} F) = \sum\nolimits_{i = 1}^n \lambda _i^2{A_i} \otimes \overline {{A_i}} = ({R^*} \otimes {R^t})({\gamma _1}*(F\overline {{\gamma _1}} F))(R \otimes \overline R ) .

As γ1*(Fγ1¯F) {\gamma _1}*(F\overline {{\gamma _1}} F) , the positive semidefinite Hermitian matrix γ2*(Fγ2¯F) {\gamma _2}*(F\overline {{\gamma _2}} F) is also invariant under realignment because \matrix{ {{\cal R}({\gamma _2}*(F\overline {{\gamma _2}} F))} \hfill & { = {\cal R}(({R^*} \otimes {R^t})({\gamma _1}*(F\overline {{\gamma _1}} F))(R \otimes \overline R ))} \hfill \cr {} \hfill & { = ({R^*} \otimes {R^t}){\cal R}({\gamma _1}*(F\overline {{\gamma _1}} F))(R \otimes \overline R ),\;{\rm{by}}\;{\rm{item}}\;(3)\;{\rm{of}}\;{\rm{Lemma}}\;1,} \hfill \cr {} \hfill & { = ({R^*} \otimes {R^t})({\gamma _1}*(F\overline {{\gamma _1}} F))(R \otimes \overline R ),\;{\rm{since}}\;{\gamma _1}*F\overline {{\gamma _1}} F = {\cal R}({\gamma _1}*F\overline {{\gamma _1}} F),} \hfill \cr {} \hfill & { = {\gamma _2}*(F\overline {{\gamma _2}} F).} \hfill \cr }

Now, notice that \matrix{ {k\lambda _1^2} \hfill & { = tr\left( {{\gamma _2}*(F\overline {{\gamma _2}} F)\;({A_1} \otimes \overline {{A_1}} )} \right)k} \hfill \cr {} \hfill & { = tr\left( {{\gamma _2}*(F\overline {{\gamma _2}} F)\;({{Id} \over {\sqrt k }} \otimes {{Id} \over {\sqrt k }})} \right)k} \hfill \cr {} \hfill & { = tr({\gamma _2}*(F\overline {{\gamma _2}} F))} \hfill \cr {} \hfill & { = tr({\cal R}({\gamma _2}*(F\overline {{\gamma _2}} F))),\;{\rm{since}}\;{\gamma _2}*F\overline {{\gamma _2}} F = {\cal R}({\gamma _2}*F\overline {{\gamma _2}} F)} \hfill \cr {} \hfill & { = \;tr({\cal R}({\gamma _2}){\cal R}{{({\gamma _2})}^*}),\;{\rm{by}}\;{\rm{items}}\;(8)\;{\rm{and}}\;(9)\;{\rm{of}}\;{\rm{Lemma}}\;1} \hfill \cr {} \hfill & { = tr({\gamma _2}\gamma _2^*),\;{\rm{since}}\;{\cal R}\;{\rm{is}}\;{\rm{an}}\;{\rm{isometry}}.} \hfill \cr }

Next, γ22R(γ2)2 \left\| {{\gamma _2}} \right\|_\infty ^2 \le \left\| {{\cal R}({\gamma _2})} \right\|_\infty ^2 , by Theorem 1, since γ2 is PPT.

Notice that the largest singular value of Gγ2 is λ1 by item a) above and the definition of Gγ2. Hence, by Lemma 3, ‖ℛ(γ2)‖ = λ1. Moreover, recall that rank(γ2) = rank(γ1) = rank(γ) = k. Therefore, kλ12=tr(γ22)γ22rank(γ2)λ12.k. k\lambda _1^2 = tr(\gamma _2^2) \le \left\| {{\gamma _2}} \right\|_\infty ^2\;{\rm{rank}}({\gamma _2}) \le \lambda _1^2.k.

The inequalities above are, in fact, equalities, which only happens when all the k non-null eigenvalues of γ2 are equal to λ1. Therefore, 1 = tr(γ2) = 1, so λ1=1k {\lambda _1} = {1 \over k} . In addition, 1=tr(γ2)=λ1tr(A1)tr(B1)=1kktr(B1). 1 = tr({\gamma _2}) = {\lambda _1}tr({A_1})tr({B_1}) = {1 \over k}\sqrt k \;tr({B_1}).

So tr(B1)=k tr({B_1}) = \sqrt k and tr(B12)=1 tr(B_1^2) = 1 . Recall that Gγ21λ1A1=B1 {G_{{\gamma _2}}}\left( {{1 \over {{\lambda _1}}}{A_1}} \right) = {B_1} is a positive semidefinite Hermitian matrix, since Gγ2 is a positive map and 1λ1A1=kId {1 \over {{\lambda _1}}}{A_1} = \sqrt k Id . Under these conditions the only possibility is B1=Idk {B_1} = {{Id} \over {\sqrt k }} : by the Cauchy-Schwarz inequality, tr(B1)2tr(B12)tr(Id)=k tr{({B_1})^2} \le tr(B_1^2)\,tr(Id) = k , and equality holds only if B1 is a multiple of Id. Therefore (γ2)B=Gγ2(Id)=Gγ2(kA1)=Idk,(γ2)A=Fγ2(Id)=Fγ2(kB1)=Idk. {({\gamma _2})_B} = {G_{{\gamma _2}}}(Id) = {G_{{\gamma _2}}}(\sqrt k {A_1}) = {{Id} \over k},\;\;\;{({\gamma _2})_A} = {F_{{\gamma _2}}}(Id) = {F_{{\gamma _2}}}(\sqrt k {B_1}) = {{Id} \over k}.

Finally, notice that γ2 = (R*Id)γ1(RId) = (R*S)γ(RS*). The proof is complete.

5.
Lower Bound for the Rank of the Special Triad

In this section, we prove that rank(γ) ≥ k, whenever a state γ is PPT or SPC or invariant under realignment and rank(γA) = rank(γB) = k. Then we show that if rank(γ) = k, then γ is separable in each of these cases.

Explicit examples of states satisfying the hypotheses of these theorems are the separable states γ1=i=1kaiai*aiai*,γ2=i=1kaiai*ai¯ait, {\gamma _1} = \sum\nolimits_{i = 1}^k {a_i}a_i^* \otimes {a_i}a_i^*,\;\;\;{\gamma _2} = \sum\nolimits_{i = 1}^k {a_i}a_i^* \otimes \overline {{a_i}} a_i^t, where a1, …, ak is any basis of ℂk. It is clear that their reduced ranks and their ranks are equal to k. In addition, it is clear that γ1 is positive under partial transpose and R(γ1Γ)=R(γ2)=γ2 {\cal R}(\gamma _1^\Gamma ) = {\cal R}({\gamma _2}) = {\gamma _2} . Thus γ1 is also SPC and γ2 is invariant under realignment.
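These examples are easy to verify numerically. Taking the standard basis ai = ei (so that āi aiᵗ = ai ai* and γ1 coincides with γ2 for this choice), and assuming the usual reshuffling conventions ℛ(A)(i,j),(k,l) = A(i,k),(j,l) and (AΓ)(i,k),(j,l) = A(i,l),(j,k), a hedged sketch (function names are ours):

```python
import numpy as np

K = 3
gamma = np.zeros((K * K, K * K))
for i in range(K):
    e = np.zeros(K); e[i] = 1.0
    gamma += np.kron(np.outer(e, e), np.outer(e, e))
gamma /= K                                   # normalize: tr(gamma) = 1

def partial_transpose(A, K):
    # (A^Gamma)_{(i,k),(j,l)} = A_{(i,l),(j,k)}: transpose the second factor
    return A.reshape(K, K, K, K).transpose(0, 3, 2, 1).reshape(K * K, K * K)

def realign(A, K):
    # R(A)_{(i,j),(k,l)} = A_{(i,k),(j,l)}
    return A.reshape(K, K, K, K).transpose(0, 2, 1, 3).reshape(K * K, K * K)

gamma_A = np.einsum('ikjk->ij', gamma.reshape(K, K, K, K))  # partial trace
# gamma is PPT and invariant under realignment, and its rank and reduced
# rank both equal K, as claimed in the text
```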

We start this section by proving the SPC and the invariant under realignment cases. In their proofs we use the result that guarantees their separability whenever their non-null Schmidt coefficients are equal, which follows from the complete reducibility property [5, Proposition 15].

5.1.
First Cases: SPC States and Invariant under Realignment States

Before proving the inequality, notice that by the symmetry of their Schmidt decompositions [5, Corollary 25], γB = γA or γB=γA¯ {\gamma _B} = \overline {{\gamma _A}} , when γ is SPC or invariant under realignment, respectively. Hence rank(γA) = rank(γB). In addition, we can assume without loss of generality that rank(γA) = k, otherwise we would be able to embed γ in ℳs ⊗ ℳs, where s = rank(γA), and obtain the same result.

Theorem 4

If γ ∈ ℳk ⊗ ℳk is an SPC state such that rank(γA) = k, then rank(γ) ≥ k. In addition, if the equality holds, then γ is separable.

Proof

By Theorem 2, there is an invertible matrix R such that δ=(R*R*)γ(RR)=i=1nλiγiγi, \delta = ({R^*} \otimes {R^*})\gamma (R \otimes R) = \sum\nolimits_{i = 1}^n {\lambda _i}{\gamma _i} \otimes {\gamma _i}, where λ1=1k {\lambda _1} = {1 \over k} , γ1=Idk {\gamma _1} = {{Id} \over {\sqrt k }} , tr(γiγ1)=tr(γi)k=0 tr({\gamma _i}{\gamma _1}) = {{tr({\gamma _i})} \over {\sqrt k }} = 0 for i > 1. So δA=i=1nλiγitr(γi)=Idk {\delta _A} = \sum\limits_{i = 1}^n {{\lambda _i}{\gamma _i}tr({\gamma _i})} = {{Id} \over k} .

Now, notice that δ is still SPC by [5, Corollary 25]. Hence δδA=1k {\left\| \delta \right\|_\infty } \le {\left\| {\delta _A}\right\|_\infty } = {1 \over k} , by Theorem 1.

So 1kδtr(δ)rank(δ)=1rank(δ) {1 \over k} \ge {\left\| \delta \right\|_\infty } \ge {{tr(\delta )} \over {{\rm{rank}}(\delta )}} = {1 \over {{\rm{rank}}(\delta )}} . Hence rank(δ) ≥ k.

Since R is invertible, rank(γ) = rank(δ) ≥ k.

For the next part, assume rank(γ) = k. Then rank(δ) = k. Therefore, 1=tr(δ)δrank(δ)=1k.k=1. 1 = tr(\delta ) \le {\left\| \delta \right\|_\infty }{\rm{rank}}(\delta ) = {1 \over k}.k = 1

Since the equality tr(δ) = ‖δ rank(δ) holds, the non-null eigenvalues of δ are equal to ‖δ. So tr(δ) = kδ = 1. Hence δ=1k {\left\| \delta \right\|_\infty } = {1 \over k} and tr(δ2)=1k tr({\delta ^2}) = {1 \over k} .

Next, since the linear contraction – R((⋅)Γ) – preserves the Frobenius norm of δ, (7) 1k=tr(δ2)=tr(R(δΓ)R(δΓ)*)R(δΓ)R(δΓ)1. {1 \over k} = tr({\delta ^2}) = tr({\cal R}({\delta ^\Gamma }){\cal R}{({\delta ^\Gamma })^*}) \le {\left\| {\cal R}({\delta ^\Gamma })\right\|_\infty }{\left\| {\cal R}({\delta ^\Gamma })\right\|_1}.

By item (5) of Lemma 1, ℛ(δΓ) = ℛ(δ)F. Since F is unitary, ‖ℛ(δ)F = ‖ℛ(δ)‖.

Therefore, R(δΓ)=R(δ)F=R(δ)=λ1=1k. {\left\| {\cal R}({\delta ^\Gamma })\right\|_\infty } = {\left\| {\cal R}(\delta )F\right\|_\infty } = {\left\| {\cal R}(\delta )\right\|_\infty } = {\lambda _1} = {1 \over k}.

Now, since δ is SPC, by its definition, ℛ(δΓ) is positive semidefinite. Hence R(δΓ)1R(δΓ)Γ1. {\left\| {\cal R}({\delta ^\Gamma })\right\|_1} \le {\left\| {\cal R}{({\delta ^\Gamma })^\Gamma }\right\|_1}.

By item (7) of Lemma 1, ℛ(δΓ)Γ = δF. Thus, ‖ℛ(δΓ)‖1 ≤ ‖ℛ(δΓ)Γ1 = ‖δF1 = ‖δ1 = 1.

Using these pieces of information in Equation (7) we obtain (8) 1k=tr(R(δΓ)R(δΓ)*)R(δΓ)R(δΓ)11k.1. {1 \over k} = tr({\cal R}({\delta ^\Gamma }){\cal R}{({\delta ^\Gamma })^*}) \le {\left\| {\cal R}({\delta ^\Gamma })\right\|_\infty }{\left\| {\cal R}({\delta ^\Gamma })\right\|_1} \le {1 \over k}.1.

Again, tr(ℛ(δΓ)2) = ‖ℛ(δΓ)‖‖ℛ(δΓ)‖1 only holds if the non-null eigenvalues of the positive semidefinite Hermitian matrix ℛ(δΓ) are equal to R(δΓ)=λ1=1k {\left\| {\cal R}({\delta ^\Gamma })\right\|_\infty } = {\lambda _1} = {1 \over k} .

Finally, the non-null eigenvalues of ℛ(δΓ) are the singular values of Gδ, which are the non-null Schmidt coefficients of δ, as explained in the proof of Lemma 3. Since δ is SPC and its non-null Schmidt coefficients are equal, δ is separable by [5, Proposition 15]. Since R is invertible, γ is separable too.

The invariant under realignment counterpart is proved next in a similar way with minor modifications.

Theorem 5

If γ ∈ ℳk ⊗ ℳk is an invariant under realignment state such that rank(γA) = k, then rank(γ) ≥ k. In addition, if the equality holds, then γ is separable.

Proof

By Theorem 2, there is an invertible matrix R such that δ=(R*Rt)γ(RR¯)=i=1nλiγiγi¯, \delta = ({R^*} \otimes {R^t})\gamma (R \otimes \overline R ) = \sum\nolimits_{i = 1}^n {\lambda _i}{\gamma _i} \otimes \overline {{\gamma _i}} , where λ1=1k {\lambda _1} = {1 \over k} , γ1=Idk {\gamma _1} = {{Id} \over {\sqrt k }} , tr(γiγ1)=tr(γi)k=0 tr({\gamma _i}{\gamma _1}) = {{tr({\gamma _i})} \over {\sqrt k }} = 0 for i > 1. So δA=i=1nλiγitr(γi)=Idk {\delta _A} = \sum\limits_{i = 1}^n {{\lambda _i}{\gamma _i}tr({\gamma _i})} = {{Id} \over k} .

Now, by item (3) of Lemma 1, R(δ)=R((R*Rt)γ(RR¯))=(R*Rt)R(γ)(RR¯)=(R*Rt)γ(RR¯)=δ. {\cal R}(\delta ) = {\cal R}(({R^*} \otimes {R^t})\gamma (R \otimes \overline R )) = ({R^*} \otimes {R^t}){\cal R}(\gamma )(R \otimes \overline R ) = ({R^*} \otimes {R^t})\gamma (R \otimes \overline R ) = \delta .

Thus, δ is invariant under realignment. Hence δδA=1k {\left\| \delta \right\|_\infty } \le {\left\| {\delta _A}\right\|_\infty } = {1 \over k} , by Theorem 1.

So 1kδtr(δ)rank(δ)=1rank(δ) {1 \over k} \ge {\left\| \delta \right\|_\infty } \ge {{tr(\delta )} \over {{\rm{rank}}(\delta )}} = {1 \over {{\rm{rank}}(\delta )}} . Hence rank(δ) ≥ k.

Since R is invertible, rank(γ) = rank(δ) ≥ k.

For the next part, assume rank(γ) = k. Then rank(δ) = k. Therefore, 1=tr(δ)δrank(δ)=1k.k=1 1 = tr(\delta ) \le {\left\| \delta \right\|_\infty }\;{\rm{rank}}(\delta ) = {1 \over k}.k = 1.

Since the equality tr(δ) = ‖δ rank(δ) holds, the non-null eigenvalues of δ are equal to ‖δ. Moreover, tr(δ) = kδ = 1. Hence δ=1k {\left\| \delta \right\|_\infty } = {1 \over k} .

Since ℛ(δ) = δ, the non-null singular values of ℛ(δ) are the non-null eigenvalues of δ, which are equal in this case. Hence the non-null Schmidt coefficients of the Schmidt decomposition of δ are equal by Lemma 3. We know that every invariant under realignment state with equal non-null Schmidt coefficients is separable by [5, Proposition 15]. So δ is separable and so is γ, since R is invertible.
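The proofs of Theorems 4 and 5 both hinge on the same trace argument: for a positive semidefinite matrix, equality in tr(δ) ≤ ‖δ‖ rank(δ) forces every non-null eigenvalue to equal ‖δ‖. A toy numerical check (the sizes are our choice):

```python
import numpy as np

k, n = 4, 6                      # rank k inside an n x n ambient space
g = np.zeros((n, n))
g[:k, :k] = np.eye(k) / k        # k eigenvalues equal to 1/k, the rest 0

trace = np.trace(g)
sup_norm = np.linalg.eigvalsh(g).max()   # operator norm of a PSD matrix
rank = np.linalg.matrix_rank(g)
# trace == sup_norm * rank, so the non-null spectrum is flat at 1/k
```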

5.2.
The PPT Counterpart

For our final results, we need some tools developed in Sections 2 and 3 together with the complete reducibility property. First, in the next lemma, we show that the rank of a PPT state is greater than or equal to each of its reduced ranks.

Lemma 8

Let γ ∈ ℳk ⊗ ℳm be a PPT state. Then rank(γ) ≥ max{rank(γA), rank(γB)}.

Proof

Let us assume without loss of generality that max{rank(γA), rank(γB)} = rank(γA) = k. So there is an invertible matrix R ∈ ℳk such that RγAR*=1kId R{\gamma _A}{R^*} = {1 \over k}Id . Define δ = (RId)γ(R*Id) and notice that δA=1kId {\delta _A} = {1 \over k}Id .

Since δ is PPT, by Theorem 1, δδA=1k {\left\| \delta \right\|_\infty } \le {\left\| {\delta _A}\right\|_\infty } = {1 \over k} . Hence 1=tr(δA)=tr(δ)δrank(δ)1krank(δ). 1 = tr({\delta _A}) = tr(\delta ) \le {\left\| \delta \right\|_\infty }\;{\rm{rank}}(\delta ) \le {1 \over k}{\rm{rank}}(\delta ).

Thus, rank(γ) = rank(δ) ≥ max{rank(γA), rank(γB)}.
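Lemma 8 is easy to probe numerically on separable (hence PPT) states. A hedged NumPy sketch (the random mixture of five product vectors is an illustrative choice, not the paper's construction):

```python
import numpy as np

rng = np.random.default_rng(1)
k = m = 3

def partial_traces(gamma, k, m):
    """Reduced states: gamma_A traces out the second factor, gamma_B the first."""
    T = gamma.reshape(k, m, k, m)
    return np.einsum('iaja->ij', T), np.einsum('iaib->ab', T)

# Random separable state: mixture of five product vectors (PPT by construction).
gamma = np.zeros((k * m, k * m), dtype=complex)
for _ in range(5):
    p = rng.standard_normal(k) + 1j * rng.standard_normal(k)
    q = rng.standard_normal(m) + 1j * rng.standard_normal(m)
    v = np.kron(p, q)
    gamma += np.outer(v, v.conj())
gamma /= np.trace(gamma).real

gA, gB = partial_traces(gamma, k, m)

def rk(X):
    return np.linalg.matrix_rank(X, tol=1e-10)

assert rk(gamma) >= max(rk(gA), rk(gB))   # the inequality of Lemma 8
```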

We complete this paper by proving that a PPT state with minimal rank must be separable, which is the PPT counterpart of Theorems 4 and 5.

Theorem 6

If δ ∈ ℳk ⊗ ℳk is a PPT state such that rank(δ) = rank(δA) = rank(δB) = k, then δ is separable.

Proof

This result is trivial in ℳ2 ⊗ ℳ2, since every PPT state there is separable. Assume the result is true in ℳi ⊗ ℳi for i < k. Let us prove the result in ℳk ⊗ ℳk.

First of all, by Theorem 3, there are invertible matrices M, N ∈ ℳk such that γ = (MN )δ(M*N*) satisfies γA=γB=1kId {\gamma _A} = {\gamma _B} = {1 \over k}Id . Notice that γ is also PPT and its rank is equal to k.

Next, by Theorem 1, {\left\| \gamma \right\|_\infty } \le {\left\| {\gamma _A} \right\|_\infty } = {1 \over k} , and since 1 = tr({\gamma _A}) = tr(\gamma ) \le {\left\| \gamma \right\|_\infty }\;{\rm{rank}}(\gamma ) \le {1 \over k}\cdot k = 1, we obtain that all k non-null eigenvalues of γ are equal to {1 \over k} .
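A state meeting the hypotheses of Theorem 6 with such a flat spectrum is easy to exhibit. The sketch below is illustrative (this classically correlated state is not the γ of the proof): it builds a rank-k PPT state with maximally mixed marginals and checks that its k non-null eigenvalues all equal 1/k.

```python
import numpy as np

k = 3
# gamma = (1/k) * sum_i |ii><ii|: separable (hence PPT), rank k,
# with gamma_A = gamma_B = Id/k.
gamma = np.zeros((k * k, k * k))
for i in range(k):
    e = np.zeros(k)
    e[i] = 1.0
    w = np.kron(e, e)
    gamma += np.outer(w, w) / k

eigs = np.linalg.eigvalsh(gamma)
nonzero = eigs[eigs > 1e-12]
assert len(nonzero) == k and np.allclose(nonzero, 1.0 / k)
```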

Now, since γ is a positive semidefinite Hermitian matrix, the linear transformations Fγ and Gγ are positive maps and adjoints with respect to the trace inner product.

In addition, notice that Idk=γA=Fγ(Id) {{Id} \over k} = {\gamma _A} = {F_\gamma }(Id) and Idk=γB=Gγ(Id) {{Id} \over k} = {\gamma _B} = {G_\gamma }(Id) . Hence Fγ(Gγ(Id))=1k2Id {F_\gamma }({G_\gamma }(Id)) = {1 \over {{k^2}}}Id .

By [24, Theorem 2.3.7], the spectral radius of the positive operator FγGγ is {1 \over {{k^2}}} . Hence the largest singular value of Gγ and of Fγ is {1 \over k} , since they are adjoints. Thus, {\left\| {G_\gamma }(X)\right\|_2} \le {1 \over k} \quad\text{and}\quad {\left\| {F_\gamma }(X)\right\|_2} \le {1 \over k}, \quad\text{whenever } {\left\| X\right\|_2} = 1.
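The bound on the largest singular value of Gγ can be checked by writing Gγ as a k² × k² matrix acting on vec(X). A minimal sketch (a row-major vec convention is assumed; the maximally mixed state is an illustrative example of a γ with γA = γB = Id/k, not the γ of the proof):

```python
import numpy as np

k = 3
gamma = np.eye(k * k) / (k * k)      # maximally mixed state: gamma_A = gamma_B = Id/k
T = gamma.reshape(k, k, k, k)        # T[i, a, j, b] = gamma_{(i,a),(j,b)}

# Matrix of X -> G_gamma(X), where G_gamma(X)_{ab} = sum_{i,j} gamma_{(i,a),(j,b)} X_{ji},
# in row-major vec coordinates: rows indexed by (a,b), columns by (j,i).
M = np.einsum('iajb->abji', T).reshape(k * k, k * k)

sv = np.linalg.svd(M, compute_uv=False)
assert abs(sv[0] - 1.0 / k) < 1e-12   # largest singular value of G_gamma is 1/k
```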

Next, since γ has k linearly independent eigenvectors associated to {1 \over k} , by taking a suitable linear combination of two of them we can find an eigenvector v ∈ ℂk ⊗ ℂk associated to {1 \over k} such that ‖v‖2 = 1 and rank(v) = m < k. (Viewing two such eigenvectors as k × k matrices M1, M2, the polynomial det(M1 + tM2) has a root over ℂ, so some combination is singular.)

Notice that there are R, S ∈ ℳk with rank m such that v = (R ⊗ S)u (u as defined in Remark 1) and {\left\| {R{R^*}} \right\|_2} = {\left\| {S{S^*}} \right\|_2} = 1. Therefore \begin{aligned} {1 \over k} &= tr(\gamma v{v^*})\\ &= tr(({R^*} \otimes {S^*})\gamma (R \otimes S)u{u^t})\\ &= tr(({R^*} \otimes {S^t}){\gamma ^\Gamma }(R \otimes \overline S )F) && (F\ \text{as defined in item c) of Remark 1})\\ &\le tr(({R^*} \otimes {S^t}){\gamma ^\Gamma }(R \otimes \overline S )) && (\text{since } {\gamma ^\Gamma } \text{ and } Id - F \text{ are positive semidefinite})\\ &= tr(\gamma (R{R^*} \otimes S{S^*}))\\ &= tr({G_\gamma }(R{R^*})S{S^*})\\ &\le {\left\| {G_\gamma }(R{R^*})\right\|_2}\,{\left\| S{S^*}\right\|_2} \le {1 \over k}\cdot 1 && \left(\text{since the largest singular value of } {G_\gamma } \text{ is } {\textstyle{1 \over k}}\right). \end{aligned}

Therefore, all the inequalities above are equalities, which imply

  • (1)

    Gγ (RR*) = λSS* for some λ > 0, since Gγ is a positive map, and

  • (2)

    1k=Gγ(RR*)2=λSS*2=λ {1 \over k} = \;{\left\| {G_\gamma }(R{R^*})\right\|_2} = \lambda {\left\| S{S^*}\right\|_2} = \lambda .

Hence Gγ(RR*)=1kSS* {G_\gamma }(R{R^*}) = {1 \over k}S{S^*} . Analogously, we get Fγ(SS*)=1kRR* {F_\gamma }(S{S^*}) = {1 \over k}R{R^*} , since tr(γ(RR*SS*)) = tr(RR*Fγ (SS*)).

Therefore, Fγ(Gγ(RR*))=1k2RR* {F_\gamma }({G_\gamma }(R{R^*})) = {1 \over {{k^2}}}R{R^*} and rank(RR*) = m < k.
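The fixed-point relation Fγ(Gγ(RR*)) = {1 \over {{k^2}}}RR* can be illustrated on a concrete γ. For the maximally entangled state (an illustrative stand-in, not the γ of the proof) one has Gγ(X) = Xᵗ/k and Fγ(Y) = Yᵗ/k, so Fγ ∘ Gγ = id/k² and every matrix is such a fixed point; a short NumPy check:

```python
import numpy as np

k = 3
u = np.eye(k).reshape(-1)           # u = sum_i e_i ⊗ e_i (row-major vec of Id)
gamma = np.outer(u, u) / k          # maximally entangled state, gamma_A = gamma_B = Id/k
T = gamma.reshape(k, k, k, k)

def G_map(X):
    # G_gamma(X)_{ab} = sum_{i,j} gamma_{(i,a),(j,b)} X_{ji}
    return np.einsum('iajb,ji->ab', T, X)

def F_map(Y):
    # F_gamma(Y)_{ij} = sum_{a,b} gamma_{(i,a),(j,b)} Y_{ba}
    return np.einsum('iajb,ba->ij', T, Y)

X = np.random.default_rng(3).standard_normal((k, k))
assert np.allclose(F_map(G_map(X)), X / k**2)   # F∘G = id / k^2 for this gamma
```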

Since γ is PPT, by the complete reducibility property, (9) γ=(VW)γ(VW)+(VW)γ(VW), \gamma = (V \otimes W)\gamma (V \otimes W) + ({V^ \bot } \otimes {W^ \bot })\gamma ({V^ \bot } \otimes {W^ \bot }), where V, W, V, W are orthogonal projections onto Im(RR*), Im(SS*), ker(RR*) and ker(SS*), respectively.
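The decomposition (9) can be checked on a toy state with orthogonal local supports. In this sketch (illustrative; the projections V, W and the random states are arbitrary choices, not the paper's construction) γ is a mixture of two product states supported on Im(V) ⊗ Im(W) and Im(V⊥) ⊗ Im(W⊥), and the identity (9) holds exactly:

```python
import numpy as np

rng = np.random.default_rng(2)
k = 4

V = np.diag([1.0, 1.0, 0.0, 0.0])    # projection onto the first two coordinates
Vp = np.eye(k) - V
W, Wp = V.copy(), Vp.copy()

def rand_state(P):
    """Random density matrix supported on Im(P)."""
    M = P @ rng.standard_normal((k, k)) @ P
    rho = M @ M.T
    return rho / np.trace(rho)

gamma = 0.5 * np.kron(rand_state(V), rand_state(W)) \
      + 0.5 * np.kron(rand_state(Vp), rand_state(Wp))

VW, VpWp = np.kron(V, W), np.kron(Vp, Wp)
assert np.allclose(VW @ gamma @ VW + VpWp @ gamma @ VpWp, gamma)   # Equation (9)
```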

The proof is almost done; we just need to check that (V ⊗ W)γ(V ⊗ W) and (V⊥ ⊗ W⊥)γ(V⊥ ⊗ W⊥) are multiples of states satisfying the hypotheses. Since rank(V) = rank(W) < k and rank(V⊥) = rank(W⊥) < k, the result then follows by induction.

Notice that by Equation (9) and the definition of Gγ, Im(Gγ(V))Im(W),Im(Gγ(V))Im(W)andIm(Fγ(W))Im(V),Im(Fγ(W))Im(V). \matrix{ {{\rm{Im}}({G_\gamma }(V)) \subset {\rm{Im}}(W),{\rm{Im}}({G_\gamma }({V^ \bot })) \subset {\rm{Im}}({W^ \bot })\;{\rm{and}}} \hfill \cr {{\rm{Im}}({F_\gamma }(W)) \subset {\rm{Im}}(V),{\rm{Im}}({F_\gamma }({W^ \bot })) \subset {\rm{Im}}({V^ \bot }).\quad } \hfill \cr }

Next, recall that V + V = W + W = Id, V V = W W = 0 and Gγ(V)+Gγ(V)=Gγ(Id)=1kId=1kW+1kW,Fγ(W)+Fγ(W)=Fγ(Id)=1kId=1kV+1kV. \matrix{ {{G_\gamma }(V) + {G_\gamma }({V^ \bot }) = {G_\gamma }(Id) = {1 \over k}Id = {1 \over k}W + {1 \over k}{W^ \bot },} \hfill \cr {{F_\gamma }(W) + {F_\gamma }({W^ \bot }) = {F_\gamma }(Id) = {1 \over k}Id = {1 \over k}V + {1 \over k}{V^ \bot }.} \hfill \cr }

Therefore Gγ(V)=1kW,Fγ(W)=1kVandGγ(V)=1kW,Fγ(W)=1kV. {G_\gamma }(V) = {1 \over k}W,\;{F_\gamma }(W) = {1 \over k}V\;{\rm{and}}\;{G_\gamma }({V^ \bot }) = {1 \over k}{W^ \bot },{F_\gamma }({W^ \bot }) = {1 \over k}{V^ \bot }.

Now, define γ1=km(VW)γ(VW)andγ2=kkm(VW)γ(VW). {\gamma _1} = {k \over m}(V \otimes W)\gamma (V \otimes W)\;{\rm{and}}\;{\gamma _2} = {k \over {k - m}}({V^ \bot } \otimes {W^ \bot })\gamma ({V^ \bot } \otimes {W^ \bot }).

Notice that (γ1)A=Fγ1(Id)=kmFγ(W)=1mVand(γ1)B=Gγ1(Id)=kmGγ(V)=1mW. {({\gamma _1})_A} = {F_{{\gamma _1}}}(Id) = {k \over m}{F_\gamma }(W) = {1 \over m}V\;{\rm{and}}\;{({\gamma _1})_B} = {G_{{\gamma _1}}}(Id) = {k \over m}{G_\gamma }(V) = {1 \over m}W.

Thus, max{rank((γ1)A), rank((γ1)B)} = max{rank(V ), rank(W )} = max{rank(R), rank(S)} = m.

Moreover, notice that (γ2)A=Fγ2(Id)=kkmFγ(W)=1kmVand(γ2)B=Gγ2(Id)=kkmGγ(V)=1kmW. {({\gamma _2})_A} = {F_{{\gamma _2}}}(Id) = {k \over {k - m}}{F_\gamma }({W^ \bot }) = {1 \over {k - m}}{V^ \bot }\;{\rm{and}}\;{({\gamma _2})_B} = {G_{{\gamma _2}}}(Id) = {k \over {k - m}}{G_\gamma }({V^ \bot }) = {1 \over {k - m}}{W^ \bot }.

Therefore max{rank((γ2)A), rank((γ2)B)} = max{rank(V), rank(W)} = km.

By their definitions, γ1 and γ2 are PPT. So, by Lemma 8, rank(γ1) ≥ m and rank(γ2) ≥ km.

Recall that k = rank(γ) = rank(γ1) + rank(γ2) ≥ m + (km). Thus rank(γ1) = m and rank(γ2) = km.

Since γ=mkγ1+kmkγ2 \gamma = {m \over k}{\gamma _1} + {{k - m} \over k}{\gamma _2} , γ has k eigenvalues equal to 1k {1 \over k} and γ1γ2 = 0,

  • γ1 has m eigenvalues equal to 1m {1 \over m} and the others 0,

  • γ2 has km eigenvalues equal to 1km {1 \over {k - m}} and the others 0.

Hence,

  • γ1 has m eigenvalues equal to 1m {1 \over m} , (γ1)A=1mV {({\gamma _1})_A} = {1 \over m}V , (γ1)B=1mW {({\gamma _1})_B} = {1 \over m}W and rank(V ) = rank(W ) = m.

  • γ2 has km eigenvalues equal to 1km {1 \over {k - m}} , (γ2)A=1kmV {({\gamma _2})_A} = {1 \over {k - m}}{V^ \bot } , (γ2)B=1kmW {({\gamma _2})_B} = {1 \over {k - m}}{W^ \bot } and rank(V) = rank(W) = km.

By the induction hypothesis, γ1 and γ2 are separable, hence so are γ and δ.

6.
Summary and Conclusions

In this paper we proved new results for a triad of quantum state types that includes the positive under partial transpose type. We obtained the same upper bound on the spectral radius for all three types. We then showed that the first two types can always be put in the filter normal form, while the third can only under some restriction. Finally, we proved a lower bound on their ranks and showed that whenever this bound is attained the states are separable; this last result is another consequence of their complete reducibility property. Together, these results are strong evidence that the three types are deeply connected, and that the complete reducibility property is a unifying thread behind many results in entanglement theory.

DOI: https://doi.org/10.2478/qic-2024-0003 | Journal eISSN: 3106-0544 | Journal ISSN: 1533-7146
Language: English
Page range: 40 - 57
Submitted on: Sep 6, 2024
Accepted on: Nov 13, 2024
Published on: Nov 25, 2024
Published by: Cerebration Science Publishing Co., Limited
In partnership with: Paradigm Publishing Services
Publication frequency: 1 issue per year

© 2024 Daniel Cariello, published by Cerebration Science Publishing Co., Limited
This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 License.