
Zoo: [[2^j, 2^j-j-2, 3]] Gottesman / Quantum Hamming Code #238

Closed

Conversation

Contributor

@Fe-r-oz Fe-r-oz commented Feb 28, 2024

Implementation for Gottesman Code

@Fe-r-oz Fe-r-oz closed this Feb 28, 2024
@Fe-r-oz Fe-r-oz reopened this Feb 28, 2024
# How to use:
# Create two different instances of the quantum Hamming code
code1 = QHamming(5)  # r = 5
code2 = QHamming(7)  # r = 7

# Get the parity check matrices for each instance
H1 = parity_checks(code1)
H2 = parity_checks(code2)
Package to be used: LinearAlgebra

# Example usage
n_i = [2, 3]  # Valid example
k_i = [1, 2]
d_i = [1, 1]
r_i = [1, 1]

code = HypergraphProduct(n_i, k_i, d_i, r_i)

# Access and use functionalities:
println("Code block size (n):")
println(code_n(code))

println("X parity-check matrix:")
println(parity_checks_x(code))

println("Z parity-check matrix:")
println(parity_checks_z(code))
Create hypergraphproductcode.jl
Member

@Krastanov Krastanov left a comment


This is a pretty good start, thank you! I would suggest focusing just on implementing QHamming here and leaving the rest for other PRs. Add tests and documentation, and make sure it actually works. I left some stylistic comments in as well.

Review comments were left on:
src/ecc/codes/code833.jl
src/ecc/codes/code422.jl
src/ecc/decoder_pipeline.jl
src/ecc/codes/QHammingcode.jl
@Krastanov
Member

To keep this easier for me, I will mark this back to "draft" stage. When you are ready, please click "resolve" on my comments above and re-request a review.

@Krastanov Krastanov marked this pull request as draft March 1, 2024 20:54
@Fe-r-oz Fe-r-oz changed the title from "Iterative Decoder + Some Documentation: First Step for creating a Zoo of QEC codes and Decoders" to "QHamming Code" Mar 1, 2024
@Krastanov Krastanov marked this pull request as ready for review March 16, 2024 19:56
@Krastanov
Member

I ran some quick benchmarks on your routines. Here are plots of the performance of the j = 3, 4, and 5 codes.

[plot: benchmark results for the j = 3, 4, and 5 codes]

3 and 4 look great, but I am a bit worried that there must be something wrong with 5. They all should have the same slope (the slope depends on the distance which is 3 for all Gottesman codes). I will need to do some debugging...
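(For context on why the slopes should match: a distance-3 code corrects any single-qubit error, so to leading order the logical error rate scales as p_logical ∝ p_physical^2, i.e. a slope of 2 on a log-log plot, independent of j.)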

@Fe-r-oz
Contributor Author

Fe-r-oz commented Mar 16, 2024

Thanks for the refinement. By "class" I meant the error-correcting code class, but "family" is the right term.

Since the v0.8.22 changes on master did not come from my branch, I thought I should add this sentence.

@Krastanov
Member

I see you are making a special case for j=3. Why?

Which algorithm exactly did you use? The one in Gottesman's thesis or another one?

@Fe-r-oz
Contributor Author

Fe-r-oz commented Mar 16, 2024

I see you are making a special case for j=3. Why?

Which algorithm exactly did you use? The one in Gottesman's thesis or another one?

I mentioned this point in my earlier comment about the pedagogical details.

In his thesis, Gottesman presents "possible" checks / "similar possible" checks, separately for even j and odd j. That is why the smallest case for even j is [[16,10,3]]. The convenient way is to utilize the structure which automatically satisfies all the even and odd j checks for j >= 4! The case j == 3 is a special case which leads to [[8,3,3]].

I utilized the symmetric structure and developed the algorithm myself so that it satisfies the checks for j >= 4; thus I defined j == 3 as a special case. This was a convenient and much faster approach. I designed the algorithm by hand, drawing many tables for j >= 4, which made the symmetric structure visible. The algorithm proved correct when it satisfied j == 4, and the symmetric structure is visible as well.

Then I read his thesis again, the four pages from page 90 to 93, where Gottesman describes some of the checks (not all), but I ended up implementing all the checks. His approach/checks are not complete enough to design a general algorithm if one considers even j and odd j at the same time, so I had to come up with a general algorithm which includes his checks. This verified not only the results but the theory.

@Krastanov
Member

It is really cool that you tried to design it by hand, but at this point I am fairly certain there is a mistake in it. Do not get discouraged, though: you have done it exactly the way in which you get to learn the most from your mistake. One word of caution: if you design something yourself, you have to always be extremely mistrustful of your own implementation.

The issue does not seem to be particularly consistent. Maybe it fails only for odd powers. Here is the test including j=6.

[plot: benchmark results including j = 6]

It works exactly as expected for j=6.

Now the goal is to figure out what the error is for j=5. One way to try to figure that out is to implement someone else's version of the algorithm. Gottesman spells out all the steps exactly in section 3, paragraphs 3-5, of this paper https://arxiv.org/pdf/quant-ph/9604038.pdf

For the moment I will turn this again into a draft. We cannot merge it with the j=5 bug.

Also, it is possible there is no bug here. Maybe the bug is in the tool I use to test. I will make that one public later today.

@Krastanov Krastanov marked this pull request as draft March 16, 2024 20:27
@Fe-r-oz
Contributor Author

Fe-r-oz commented Mar 16, 2024

For the moment I will turn this again into a draft. We cannot merge it with the j=5 bug.

Also, it is possible there is no bug here. Maybe the bug is in the tool I use to test. I will make that one public later today.

Please test for j=7, j=9, odd powers etc. as well.

I am confident that the symmetric structure and the algorithm hold; otherwise it would not have worked at all, because the configuration for j == 4 has a somewhat complicated symmetry that has to be spot on. I drew the tables by hand to verify the algorithm.

I read the details of section 3, paragraphs 3-5, but the details he provides are not complete enough to implement a general class that works for all j values. No different checks are mentioned there, but he does mention them in his thesis.

The question is: how does one expand from j == 3 to j == 4, 5, and onwards without implementing different checks?

That's why he suggests different checks in his thesis. He discusses the even and odd j details in his thesis, Table 8.1 on page 91 and pages 90 to 95. Also, there is an inner 3x7 Hamming matrix structure which is not mentioned in those details; Gottesman codes are called extended Hamming codes for this very reason. This inner 3x7 Hamming matrix structure is missing for j == 3 (another point that tells it is a special case).
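For reference, the 3x7 inner structure mentioned above is presumably the parity-check matrix of the classical [7,4] Hamming code. A minimal Julia sketch (not part of this PR) that builds it:

H_hamming = [isodd(col >> (row - 1)) for row in 1:3, col in 1:7]
# 3x7 Bool matrix whose columns are the binary representations of 1..7
# (column 5, for example, reads 1,0,1 from the least-significant bit up)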

@Krastanov
Member

I probably will not be able to run a check for j>7; these are expensive to run.

I disagree about the paper. It lists the complete algorithm. It is something like this:

Consider a fixed j and then set n = 2^j (number of physical qubits), k = n-j-2 (number of logical qubits), and c = j+2 (number of checks).

  1. First, prepare three tables of neatly ordered bitstrings. I will call them tableX, tableY, tableZ. The paper goes into details of how to prepare these tables. These tables have n rows and c columns. Using the nomenclature in the paper, row r of table"E" would be the syndrome that your code is supposed to have if it experiences single qubit error E acting on qubit r.

  2. These tables are enough to figure out what your parity checks are. Each of the c columns of the tables corresponds to one parity check (one row of the stabilizer).

More explicitly: For parity check row R and qubit column Q you have to put the following Pauli:

  • if tableX(row = Q, column = R)==1 and tableZ(row = Q, column = R)==1 then put Y
  • elseif tableX(row = Q, column = R)==1 and tableY(row = Q, column = R)==1 then put Z
  • elseif tableZ(row = Q, column = R)==1 and tableY(row = Q, column = R)==1 then put X
  • else put Identity
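As a small sanity check of that mapping, here is a hedged sketch (the helper name pauli_from_tables is just for illustration, not from the paper):

# Map the three syndrome-table bits for a given check R and qubit Q to a Pauli letter.
# x, y, z stand for tableX(Q, R), tableY(Q, R), tableZ(Q, R) respectively.
function pauli_from_tables(x::Bool, y::Bool, z::Bool)
    if x && z
        'Y'   # the check's Pauli anticommutes with both X and Z errors
    elseif x && y
        'Z'   # anticommutes with X and Y errors
    elseif z && y
        'X'   # anticommutes with Z and Y errors
    else
        'I'
    end
end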

This can be done easily with our library. You do not need to actually compute the three tables in advance, you can do everything on the fly:

parity_checks = zero(Stabilizer, n, c)
for e in 1:2^j # the row of the error tableE
    for b in 3:c # the bit of the integer we are working with (the first two bits are constant "true")
        z = the b-th bit of the e-th row of tableZ
        x = the b-th bit of the e-th row of tableX
        parity_checks[e, b] = z, x # note that they are flipped
    end
end

By the way, in case you have not seen it before, extracting a bit from an integer can be done as: isone((integer >> (bit-1)) & 0x1)
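For example, a tiny helper along those lines (the name bits is just for illustration):

bits(integer, nbits) = [isone((integer >> (bit - 1)) & 0x1) for bit in 1:nbits]
bits(0b10110, 5)  # [false, true, true, false, true], least-significant bit first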

@Fe-r-oz
Contributor Author

Fe-r-oz commented Mar 17, 2024

Thanks. I read the appendix of the paper you mentioned, https://arxiv.org/pdf/quant-ph/9604038.pdf. Table V gives Xi, Zi, and Yi. The algorithm produces the same Xi, Zi checks! I will recheck Yi (they seem similar, but I will check). It seems the last row for odd j is not supposed to be the same as for even j, which is what I had assumed. That is the issue, and it can be fixed!

Some details from the appendix of the same paper:
For s = r + 1, the general case gives 4 blocks with normal disagreements and 2 blocks with reversed disagreements. When r = 3, there are 2 blocks with normal disagreements and 2 blocks with reversed disagreements. When s = a, and j is even, there are 2 blocks with normal disagreements and 0 blocks with reversed disagreements. When s = a and j is odd, there are also 2 blocks with normal disagreements and 0 blocks with reversed disagreements. Because a ≥ 5, we do not need to consider the combined special case

My question: although I agree that the paper you referenced provides similar starting details as the thesis, the way Gottesman treats [[16,10,3]] adds a new set of different details on top of the details he provides in the earlier paper. Why he starts from the same set of details and then expounds further in his thesis is something that needs to be investigated.

Kindly have a look at pages 90 to 95 as well. I will read both papers again and try to find the hidden incongruence!

@Fe-r-oz
Contributor Author

Fe-r-oz commented Mar 17, 2024

I have tried to fix the hidden bug. Please see the graphs.

There was an assumption about the last row which I had taken to hold for all j (even as well as odd). Everything else is the same.

For odd j: this now agrees with j == 3 in structure as well! It will hold for all odd j.

julia> parity_checks(Gottesman(5))
+ XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
+ ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ
+ _X_X_X_X_X_X_X_XZYZYZYZYZYZYZYZY
+ _X_X_X_XZYZYZYZYX_X_X_X_YZYZYZYZ
+ _X_XZYZYX_X_YZYZ_X_XZYZYX_X_YZYZ
+ _XZYX_YZ_XZYX_YZ_XZYX_YZ_XZYX_YZ
+ _YXZXZ_Y_YXZXZ_Y_YXZXZ_Y_YXZXZ_Y

julia> parity_checks(Gottesman(3))
+ XXXXXXXX
+ ZZZZZZZZ
+ _X_XYZYZ
+ _XZY_XZY
+ _YXZXZ_Y

[plots: benchmark results for G(3), G(4), G(5)]

@Fe-r-oz
Contributor Author

Fe-r-oz commented Mar 17, 2024

Please see the Fresh Results :)

[updated benchmark plots]

@Krastanov
Member

Thanks. I read the appendix of the paper you mentioned, https://arxiv.org/pdf/quant-ph/9604038.pdf. Table V gives Xi, Zi, and Yi. The algorithm produces the same Xi, Zi checks! I will recheck Yi (they seem similar, but I will check). It seems the last row for odd j is not supposed to be the same as for even j, which is what I had assumed. That is the issue, and it can be fixed!

I am unsure what you are saying -- I do not know whether you are talking about your algorithm or about the algorithm in the paper.

Some details from the appendix of the same paper: For s = r + 1, the general case gives 4 blocks with normal disagreements and 2 blocks with reversed disagreements. When r = 3, there are 2 blocks with normal disagreements and 2 blocks with reversed disagreements. When s = a, and j is even, there are 2 blocks with normal disagreements and 0 blocks with reversed disagreements. When s = a and j is odd, there are also 2 blocks with normal disagreements and 0 blocks with reversed disagreements. Because a ≥ 5, we do not need to consider the combined special case

I have not read the appendix. It seems to be about a proof that the algorithm described earlier in the paper is correct. I was just looking at the definition of the algorithm in section 3.

My question: although I agree that the paper you referenced provides similar starting details as the thesis, the way Gottesman treats [[16,10,3]] adds a new set of different details on top of the details he provides in the earlier paper. Why he starts from the same set of details and then expounds further in his thesis is something that needs to be investigated.

That is pretty normal when someone writes a thesis that includes one of their papers from a few years earlier. I imagine (I am not sure) that the author simply wanted to provide some additional insight, so they structured the presentation a bit differently and chose a slightly different convention for bit ordering or error ordering.

Kindly have a look at pages 90 to 95 as well. I will read both papers again and try to find the hidden incongruence!

I do not think there is a mismatch in what they are doing in the two publications. One might have slightly different conventions from the other, but you already found other papers that also had slightly different conventions.

I am very impressed you were able to find your bug so quickly! However I also need to worry about the maintainability of this piece of code (in a year or two you might not be involved in the project to fix other bugs). Because of this, I prefer we switch to the algorithm as described in Gottesman's paper:

  1. it is a standard reference, not homegrown
  2. it is shorter (probably 20ish lines instead of 75 lines)
  3. it does not have special cases

Presumably the two should give the same answers (up to some reordering). In the end, that is the only way to be sure (or at least "a bit less unsure") that there is no other bug here.

@Fe-r-oz
Contributor Author

Fe-r-oz commented Mar 17, 2024

I am unsure what you are saying -- I do not know whether you are talking about your algorithm or about the algorithm in the paper.

I am talking about the algorithm that is presented in the paper (Table V, page 17). The homegrown algorithm produces the same Xi, Zi checks.

[screenshot: Table V, page 17 of the paper]

That is pretty normal when someone writes a thesis that includes one of their paper from a few years earlier. I imagine (I am not sure), that the author simply wanted to provide some additional insight, so they structured the presentation a bit differently and choose a slightly different convention for bit ordering or error ordering.

I think the even and odd checks on pages 92 and 95 are necessary: in the paper he does not consider the [[16,10,3]] even case and how it differs from the initial odd case [[8,3,3]] that he describes in the earlier paper, prior to the thesis. But he goes into that detail in his thesis.

I do not think there is mismatch in what they are doing in the two publications. One might have slightly different conventions from the other, but you already found other papers that also had slightly different conventions.

It's not about convention but about the type of even and odd checks, which results in a slightly different symmetric structure for even j values and odd j values. He dedicates 3 pages to the checks, meaning they are a significant part of the algorithm! My homegrown algorithm satisfies that.

I am very impressed you were able to find your bug so quickly! However I also need to worry about the maintainability of this piece of code (in a year or two you might not be involved in the project to fix other bugs). Because of this, I prefer we switch to the algorithm as described in Gottesman's paper:

  1. it is a standard reference, not homegrown
  2. it is shorter (probably 20ish lines instead of 75 lines)
  3. it does not have special cases

Presumably the two should give the same answers (up to some reordering). In the end, that is the only way to be sure (or at least "a bit less unsure") that there is no other bug here.

While I understand your concerns, please rest assured: I have faith in the symmetry and the intuition behind it. It holds nicely, and in particular it satisfies Table 8.1 on page 91 of his thesis! That is a good cross-verification. I will always be available to maintain this code, as I would like to keep contributing to this initiative. It is simply more fun to follow an intuitive approach; it keeps me excited and makes the learning more enjoyable.

The slight bug was in my assumption that the even and odd checks could be combined in one place, which I have now taken care of. I knew about the assumption, so it was not luck.

Since I approached the design from the methodology of his thesis, it is based on a standard reference! I don't think the shorter version will be bug-free if the checks are not included; otherwise he would not have dedicated 3 pages to them in his thesis. I will always be present to maintain this code. It is just exciting to come up with an intuitive algorithm. I have faith in the homegrown algorithm, as it incorporates the theory and literature, including his thesis, since I approached the design from a symmetric-structure point of view.

Gottesman concludes the section with: "There may be other symmetries of these codes, as well."

Please be assured!

@Krastanov
Member

Please be assured!

Your work is impressive, but this is not how assurances work, especially not in math and computer science. You should never trust your own code. The amount of trust you put in yours is how bugs happen. Verifying one case against a table is not a good verification (as we have seen already).

There is never a way to be completely sure in the correctness of a piece of code, but there are reasonable ways to increase the probability that the code is correct. Here there are two pretty straightforward ways to do it:

  1. Add tests that verify the properties expected from the code. The main one is "every single-qubit error should have a different syndrome". That one is relatively easy and it will actually provide the same type of guarantees:
syndromes = Set([]) # the set automatically removes repeated entries
for error_type in (single_x, single_y, single_z)
    for bit_index in 1:n
        syndrome = comm(parity_check_tableau, error_type(n, bit_index))
        @test any(==(0x1), syndrome) # checking the syndrome is not trivially zero
        push!(syndromes, syndrome)
    end
end
@test length(syndromes) == 3*n # checking that each single-qubit error has a unique syndrome
  2. Implement the very short algorithm from Gottesman's paper that I shared -- which would have the extremely important added benefit that this will be something that other people can also debug in the future, and is much shorter than your current implementation.

Both of these are necessary before this gets merged. Obviously, you work as a volunteer for this and I cannot demand anything -- you have already made a very helpful and valued contribution and I am grateful for it. You also shared a lot of resources that were very interesting, and this discussion has certainly been valuable for me. I thank you for all of these contributions!!! I will be happy to finish this on my own if you disagree about the approach I am taking.

@Krastanov
Member

And there is a pretty easy way to convince me that the algorithm I am suggesting, the one from the paper, needs these extra checks -- write it and show that it does not pass the tests I just mentioned (the one with the length of the set of syndromes).

@Fe-r-oz
Contributor Author

Fe-r-oz commented Mar 17, 2024

And there is a pretty easy way to convince me that the algorithm I am suggesting, the one from the paper, needs these extra checks -- write it and show that it does not pass the tests I just mentioned.

Thanks for the two points.

I'll read the paper you referred to and implement it. Can I include these two points in the initial weeks of my proposal (to write an algorithm for the paper), since I need time to carefully read the paper and then focus on the implementation, given that you want an exact implementation of the paper? That will also help with the literature survey, and let me spend time with this specific paper, as I will spend the next month on the literature survey.

@Fe-r-oz
Contributor Author

Fe-r-oz commented Mar 17, 2024

  1. First, prepare three tables of neatly ordered bitstrings. I will call them tableX, tableY, tableZ. The paper goes into details of how to prepare these tables. These tables have n rows and c columns.

Where exactly are the details for generating the tables you are referring to? Please point them out in the paper.

If you are talking about this, after reading section three: the tables can be generated by generating all possible bitstrings of length n.

Select binary numbers:
    Pick (j+2)-bit binary numbers for Xi and Zi (for i = 1, 2, ..., n).
    Ensure the numbers for Yi (XOR of Xi and Zi) are all different.

For the explicit construction of the [[8,3,3]] code, he presents:
[screenshot: the explicit [[8,3,3]] construction from the paper]

  1. These tables are enough to figure out what your parity checks are. Each of the c columns of the tables corresponds to one parity check (one row of the stabilizer).

Please specify the tables you are referring to.

This can be done easily with our library. You do not need to actually compute the three tables in advance, you can do everything on the fly:

There is a contradiction here: if we don't need to compute the tables, then there is no need to define them in memory either. Also, how does this make sure that the size of the Stabilizer matrix is correct, given the definitions here?

parity_checks = zero(Stabilizer, n, c).

Example: with j=3, n=2^j=8 and j+2=5. This will give 8x5, which is wrong, when the size should be 5 rows and 8 columns. So, saying there are c checks seems incorrect?

For j=6, the matrix should be 8x64. The resulting matrix size from this algorithm seems to be incorrect.

I don't think the implementation is 20ish lines.

@Krastanov
Member

Where exactly are the details for generating the tables you are referring to? Please point them out in the paper.

[screenshots: the relevant passages from the paper describing the table construction]

Please specify the tables you are referring to.

The tables I defined in my comment.

Using the nomenclature in the paper, row r of table"E" would be the syndrome that your code is supposed to have if it experiences single qubit error E acting on qubit r.

This can be done easily with our library. You do not need to actually compute the three tables in advance, you can do everything on the fly:

There is a contradiction here: if we don't need to compute the tables, then there is no need to define them in memory either. Also, how does this make sure that the size of the Stabilizer matrix is correct, given the definitions here?

You do not need to compute the tables in advance, i.e. no need to store them in memory. You do need to loop through the rows and columns, but entries can be computed on the fly.

Example: with j=3, n=2^j=8 and j+2=5. This will give 8x5, which is wrong, when the size should be 5 rows and 8 columns. So, saying there are c checks seems incorrect?

For j=3, we have n=8 physical qubits, k=n-j-2=3 logical qubits, c=n-k=5 checks, which means the parity check tableau should indeed have 5 rows and 8 columns.
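In code form (just restating the arithmetic above):

j = 3
n = 2^j        # 8 physical qubits
k = n - j - 2  # 3 logical qubits
c = n - k      # 5 checks (equivalently j + 2), so the tableau is c x n = 5 x 8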

I don't think the implementation is 20ish lines.

I will write it and upload it here in a couple of days when I get around to it. It is just 20ish lines.

@Fe-r-oz
Contributor Author

Fe-r-oz commented Mar 17, 2024

The tables I defined in my comment.

Thanks! I had mixed up the tables defined in your description of the algorithm (tableX, tableY, tableZ) with the tables on page 17. That's why I was confused about where the tables were!

You do not need to compute the tables in advance, i.e. no need to store them in memory. You do need to loop through the rows and columns, but entries can be computed on the fly.

I see, then it will be far fewer lines, since we assign Pauli operators based on the error type!!

Indeed, it might be very short code then. I thought we needed to compute all the different combinations as described in Table 1. Thanks for the clarification!

@Krastanov
Member

Here is the code:

function parity_checks(c::Gottesman)
    j = c.j
    s = j+2
    n = 2^j

    H = zero(Stabilizer, s, n)
    for i in 1:n
        H[1, i] = (true, false)
        H[2, i] = (false, true)
    end
    for i in 0:n-1 # column of H, corresponds to a single qubit error that is detectable
        Xⁱ = i
        Zⁱ = i÷2
        jeven = j%2 == 0
        ieven = i%2 == 0
        if (jeven && ieven) || (!jeven && ieven && i < n÷2) || (!jeven && !ieven && i ≥ n÷2)
            Zⁱ = ~Zⁱ
        end
        for b in 0:j-1 # which check to consider (row of H), also which bit to extract
            H[s-b,i+1] = isone((Zⁱ>>b)&0x1), isone((Xⁱ>>b)&0x1)
        end
    end
    H
end

And here are the checks for it:

H = parity_checks(Gottesman(j))
syndromes = Set([]) # the set automatically removes repeated entries
for error_type in (single_x, single_y, single_z)
    for bit_index in 1:nqubits(H)
        syndrome = comm(H, error_type(nqubits(H), bit_index))
        @assert any(==(0x1), syndrome) # checking the syndrome is not trivially zero
        push!(syndromes, syndrome)
    end
end
@assert length(syndromes) == 3*nqubits(H)

@Krastanov Krastanov mentioned this pull request Mar 17, 2024
@Fe-r-oz
Contributor Author

Fe-r-oz commented Mar 17, 2024

Dear Professor,

Please see the output for j==4, [[16,10,3]]; it does not match what is presented in his thesis, Table 8.1, page 91.

@Fe-r-oz
Contributor Author

Fe-r-oz commented Mar 17, 2024

In fact, the first two bits of Xi must always be 01, but the output is showing 10.
This is the output from the code you presented:


 parity_checks(Gottesman(4))
+ XXXXXXXXXXXXXXXX
+ ZZZZZZZZZZZZZZZZ
+ X_X_X_X_YZYZYZYZ
+ X_X_YZYZ_X_XZYZY
+ X_YZ_XZYX_YZ_XZY
+ XZ_YXZ_YXZ_YXZ_Y

But there is a fundamental design error, according to my understanding of his thesis: for M1..Mj, the first two digits can't be X. Please see his comments below:

[screenshot: the relevant passage from Gottesman's thesis]

I think there may be merit to the checks he mentioned in his thesis.

Here is what the [[16,10,3]] stabilizer looks like:

+ XXXXXXXXXXXXXXXX
+ ZZZZZZZZZZZZZZZZ
+ _X_X_X_XZYZYZYZY
+ _X_XZYZYX_X_YZYZ
+ _XZYX_YZ_XZYX_YZ
+ _YXZ_YXZ_YXZ_YXZ

And there is a pretty easy way to convince me that the algorithm I am suggesting, the one from the paper, needs these extra checks. There is a reason why he dedicated at least 3 pages just to the checks.

The stabilizers do not match. There are very significant differences between them.

@Krastanov
Member

Hi Feroz! I answered that question in the linked pull request. The stabilizers are equivalent because you can obtain one from the other by performing row operations.
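One way to check this kind of equivalence yourself, as a hedged sketch with QuantumClifford.jl (the variable names H_homegrown and H_paper are just for illustration):

using QuantumClifford
# Two stabilizer tableaus generate the same group exactly when their canonical
# (row-reduced) forms coincide, so row operations do not affect this comparison.
same_code = canonicalize!(copy(H_homegrown)) == canonicalize!(copy(H_paper))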

@Krastanov
Member

Closing in favor of #240 which is a slightly updated version of this.

@Krastanov Krastanov closed this Mar 18, 2024