
Addressing some TODOs in non-clifford functionality along with resolving #309 #313

Closed
wants to merge 135 commits into from

Conversation

Fe-r-oz
Contributor

@Fe-r-oz Fe-r-oz commented Jul 11, 2024

This PR aims to address the TODOs in the non-clifford functionality and further resolve #309.

Sorry for the inconvenience, @Krastanov, I will have to learn about interactive rebasing. I wanted to discuss addressing some of the comments you had left in the form of TODOs, because I was checking out PR 259 and becoming aware of the cool work on the non-clifford functionality. I was not sure whether to ask you as a question or to comment on the PR, so I initially posted the improvements as a comment. But, as you suggested, it is better to have a PR.

While going through your implementation I saw some TODOs, so I decided that implementing them would make me more familiar with the functionality and help me understand the changes that need to be introduced to resolve issue 309.

Before applying improvements to the TODOs:

"""Same as `all(==(0), (a.+b.+c) .% 2)`"""
function _allthreesumtozero(a,b,c) # TODO consider using bitpacking and SIMD xor with less eager shortcircuiting -- probably would be much faster
    @inbounds @simd for i in 1:length(a)
        iseven(a[i]+b[i]+c[i]) || return false
    end
    true
end

function _dictvaltype(dict)
    return eltype(dict).parameters[2] # TODO there must be a cleaner way to do this
end

function apply!(state::GeneralizedStabilizer, gate::PauliChannel)
    dict = state.destabweights
    stab = state.stab
    dtype = _dictvaltype(dict)
    tzero = zero(dtype)
    tone = one(dtype)
    newdict = typeof(dict)(tzero) # TODO jeez, this is ugly
    for ((dᵢ,dⱼ), χ) in dict # the state
        for ((Pₗ,Pᵣ), w) in zip(gate.paulis,gate.weights) # the channel
            phaseₗ, dₗ, dₗˢᵗᵃᵇ = rowdecompose(Pₗ,stab)
            phaseᵣ, dᵣ, dᵣˢᵗᵃᵇ = rowdecompose(Pᵣ,stab)
            c = (dot(dₗˢᵗᵃᵇ,dᵢ) + dot(dᵣˢᵗᵃᵇ,dⱼ))*2
            dᵢ′ = dₗ .⊻ dᵢ
            dⱼ′ = dᵣ .⊻ dⱼ
            χ′ = χ * w * (-tone)^c * (im)^(-phaseₗ+phaseᵣ+4)
            newdict[(dᵢ′,dⱼ′)] += χ′
        end
    end
    for (k,v) in newdict # TODO is it safe to modify a dict while iterating over it?
        if abs(v) < 1e-14 # TODO parameterize this pruning parameter
            delete!(newdict, k)
        end
    end
    state.destabweights = newdict
    state
end

After addressing the TODOs (please let me know whether these improvements are plausible):

"""Same as `all(==(0), (a.+b.+c) .% 2)`"""
function _allthreesumtozero(a, b, c)
    packed_sum = a .+ b .+ c
    all_even = all(@. iseven(packed_sum))
    return all_even
end

function _dictvaltype(dict)
    return Base.getindex(eltype(dict).parameters, 2)
end

function apply!(state::GeneralizedStabilizer, gate::PauliChannel)
    dict = state.destabweights
    stab = state.stab
    dtype = _dictvaltype(dict)
    tzero = zero(dtype)
    tone = one(dtype)
    newdict = DefaultDict{Tuple{BitVector, BitVector}, Complex{Float64}}(Complex{Float64}(0, 0))
    for ((dᵢ,dⱼ), χ) in dict
        for ((Pₗ,Pᵣ), w) in zip(gate.paulis, gate.weights)
            phaseₗ, dₗ, dₗˢᵗᵃᵇ = rowdecompose(Pₗ, stab)
            phaseᵣ, dᵣ, dᵣˢᵗᵃᵇ = rowdecompose(Pᵣ, stab)
            c = (dot(dₗˢᵗᵃᵇ, dᵢ) + dot(dᵣˢᵗᵃᵇ, dⱼ)) * 2
            dᵢ′ = dₗ .⊻ dᵢ
            dⱼ′ = dᵣ .⊻ dⱼ
            χ′ = χ * w * (-tone)^c * (im)^(-phaseₗ + phaseᵣ + 4)
            newdict[(dᵢ′, dⱼ′)] += χ′
        end
    end
    keys_to_delete = Tuple[]
    for (k, v) in newdict
        if abs(v) < 1e-14
            push!(keys_to_delete, k)
        end
    end
    for k in keys_to_delete
        delete!(newdict, k)
    end
    state.destabweights = newdict
    return state
end

I reran this test to confirm that there was no error:


julia> for n in 1:5
           i = rand(1:n)
           stab = random_stabilizer(n)
           genstab = GeneralizedStabilizer(stab)
           ket = Ket(stab)
           @test dm(ket) ≈ Operator(stab)
           @test dm(ket) ≈ Operator(genstab)

           pauli = random_pauli(n; nophase=false, realphase=true)
           qo_pauli = Operator(pauli)

           qo_bigtgate = n==1 ? qo_tgate : embed(qo_basis^n, i, qo_tgate)
           bigtgate = embed(n,i, pcT)
           @test qo_bigtgate ≈ Operator(bigtgate)

           for step in 1:10
               # apply!(ket, qo_bigtgate) TODO implement this API
               ket = qo_bigtgate*ket
               apply!(genstab, bigtgate)
               @test dm(ket) ≈ Operator(genstab)
               @test isapprox(expect(qo_pauli, ket), expect(pauli, genstab); atol=1e-10)
           end
       end

Iterative Decoder Attempted

Packages Used:

1. QuantumClifford
2. LDPCDecoders
3. Distances

# How to use
# Create two different instances of the quantum Hamming code
#code1 = QHamming(5)  # r = 5
#code2 = QHamming(7)  # r = 7

# Get the parity check matrices for each instance
#H1 = parity_checks(code1)
#H2 = parity_checks(code2)

Package to be used: LinearAlgebra

# Example usage
#n_i = [2, 3]  # Valid example
#k_i = [1, 2]
#d_i = [1, 1]
#r_i = [1, 1]

#code = HypergraphProduct(n_i, k_i, d_i, r_i)

# Access and use functionalities:
#println("Code block size (n):")
#println(code_n(code))

#println("X parity-check matrix:")
#println(parity_checks_x(code))

#println("Z parity-check matrix:")
#println(parity_checks_z(code))

Create hypergraphproductcode.jl
@Krastanov
Member

Thanks! Here are a few more details on what I meant by "single commit = single semantic change" and how it relates to interactive rebase. Do not hesitate to ask if something is unclear! This workflow can be a bit annoying at first, but it really helps with organizing your work long-term.

I can look at the changes that you have made by clicking on your single new commit: 8a0f778

I see three changes in it. They are small so it kinda makes sense for them to be in a single commit, but let's "role play" through a review:

step 1

commits:
1. fixing three TODOs

I review and say "hey could you change something about the second TODO"

step 2

you make that change and we have

commits:
1. fixing three TODOs
2. modifying the second TODO

Then we notice that there is something about the first todo and you fix that too and we end up with

step 3

commits:
1. fixing three TODOs
2. modifying the second TODO
3. modifying the first todo

finally, there is something else that we notice about the second TODO and you fix that and we are left with

step 4

commits:
1. fixing three TODOs
2. modifying the second TODO
3. modifying the first todo
4. modifying the second TODO again

So now if I want to review just the changes to the second TODO, I need to read through commits 1,2, and 4, which makes my work pretty inconvenient. Even worse, it is much more probable that I will miss something this way. I will not be able to think separately about each TODO, I will have to review them all together, risking missing something because I need to divine which change is related to which TODO.

Of course, this particular example is incredibly contrived, the changes are small, etc, so it is actually pretty easy for me to read all three TODOs together. You really do not need to change anything here. I am just using it as an example.

In this example, it would have been better if your commits looked like:

commits before review:
1. fixing the TODOs about yada yada (i.e. the first TODO)
2. fixing the TODOs about yada yada (i.e. the second TODO)
3. fixing the TODOs about yada yada (i.e. the third TODO)
commits after review:
5. modifying the second TODO
6. modifying the first todo
7. modifying the second TODO again

Again, this is just a contrived example, no need to actually do it here. Anyway, this would already be a (hypothetical) improvement. I can now select commits 2, 5, and 7 and look at the changes just to the second TODO. Importantly, at this point you should also run an interactive rebase `git rebase -i HEAD~n` (if you want to rebase only the last n commits) or `git rebase -i master` (if you want to rebase all commits in the pull request) and reorder the commits:

commits after rebase (just reordering)
1. fixing the TODOs about yada yada (i.e. the first TODO)
6. modifying the first todo
2. fixing the TODOs about yada yada (i.e. the second TODO)
5. modifying the second TODO
7. modifying the second TODO again
3. fixing the TODOs about yada yada (i.e. the third TODO)

And the commits should not only be reordered during the rebase, some of them should be squashed:

commits after rebase (including squashing)
1. fixing the TODOs about yada yada (i.e. the first TODO)
   squashing into 1: 6. modifying the first todo
2. fixing the TODOs about yada yada (i.e. the second TODO)
   squashed into 2: 5. modifying the second TODO
   squashed into 2: 7. modifying the second TODO again
3. fixing the TODOs about yada yada (i.e. the third TODO)

Now you would have only 3 neatly separated commits, each dedicated to a separate semantic change.
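
For the contrived example above, the todo list you would edit during `git rebase -i` could look roughly like this (the hashes are just placeholders, not real commits):

pick aaaaaaa fixing the TODOs about yada yada (i.e. the first TODO)
squash bbbbbbb modifying the first todo
pick ccccccc fixing the TODOs about yada yada (i.e. the second TODO)
squash ddddddd modifying the second TODO
squash eeeeeee modifying the second TODO again
pick fffffff fixing the TODOs about yada yada (i.e. the third TODO)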

Member

@Krastanov Krastanov left a comment

Here are a few comments specifically on the code, unrelated on the previous comments on workflow.

Currently I believe the previous versions of these lines were better.

src/nonclifford.jl (outdated)
end

function _dictvaltype(dict)
return eltype(dict).parameters[2] # TODO there must be a cleaner way to do this
Member

This is going backwards. The `[]` is syntactic sugar for `getindex`. It does not make sense to switch to the less readable `getindex`.

The original reason for this todo is that it is ugly and dangerous to rely on non-public APIs, specifically the `.parameters` access.

Looking around the Julia documentation, I see that there is valtype which is probably the correct way to fix this:

julia> valtype(Dict(:a=>1))
Int64

Contributor Author

Yeah, the following based on `valtype` works:

function _dictvaltype(dict)
    return valtype(dict)
end

Contributor Author

julia> using DataStructures

julia> d = DefaultDict(0.0im, (falses(1),falses(1))=>1.0+0.0im)
DefaultDict{Tuple{BitVector, BitVector}, ComplexF64, ComplexF64} with 1 entry:
  ([0], [0]) => 1.0+0.0im

julia> valtype(d)
ComplexF64 (alias for Complex{Float64})

julia> eltype(values(d))
ComplexF64 (alias for Complex{Float64})

julia> eltype(d).parameters[2]
ComplexF64 (alias for Complex{Float64})

src/nonclifford.jl (outdated)
Comment on lines 194 to 163
-    for (k,v) in newdict # TODO is it safe to modify a dict while iterating over it?
-        if abs(v) < 1e-14 # TODO parameterize this pruning parameter
-            delete!(newdict, k)
+    keys_to_delete = Tuple[]
+    for (k, v) in newdict
+        if abs(v) < 1e-14
+            push!(keys_to_delete, k)
         end
     end
+    for k in keys_to_delete
+        delete!(newdict, k)
+    end
     state.destabweights = newdict
     state
Member

Is this actually necessary? Is it actually dangerous to modify the dictionary keys while iterating over it?

Contributor Author

Nope. It's not dangerous.

Contributor Author

I passed that verdict too soon.

Someone else provided comprehensive reasons why it is dangerous and suggested that filter! is the safe approach: 'why not use filter!(.., result_dict), which handles the edge case of rehashing correctly for you' and 'deleting like you're doing is definitely dangerous'.
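
For example, on a toy dictionary (illustrative keys and values only, not the actual destabweights), the filter!-based pruning would look like:

julia> d = Dict((falses(1),falses(1)) => 1.0+0.0im, (trues(1),falses(1)) => 1e-16+0.0im);

julia> filter!(kv -> abs(kv[2]) >= 1e-14, d)  # keep only weights above the pruning threshold
Dict{Tuple{BitVector, BitVector}, ComplexF64} with 1 entry:
  ([0], [0]) => 1.0+0.0im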

src/nonclifford.jl
@@ -179,25 +178,29 @@ function apply!(state::GeneralizedStabilizer, gate::PauliChannel)
    dtype = _dictvaltype(dict)
    tzero = zero(dtype)
    tone = one(dtype)
    newdict = typeof(dict)(tzero) # TODO jeez, this is ugly
Member

This is worse. Previously the type of the new dictionary was neatly dependent on the type of the input dictionary, ensuring generality. Now you have hardcoded the dictionary type (and you are not guaranteed that the rest of the code would actually work with that type, e.g. if the user is not using Float64).
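
For example, something along these lines keeps the value type generic (a toy sketch with a made-up element type, not the actual GeneralizedStabilizer internals):

julia> using DataStructures

julia> d32 = DefaultDict{Tuple{BitVector,BitVector}, ComplexF32}(zero(ComplexF32));

julia> newdict = typeof(d32)(zero(valtype(d32)));  # same key/value types as the input, nothing hardcoded

julia> valtype(newdict)
ComplexF32 (alias for Complex{Float32})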

Contributor Author

Agreed.

I asked about whether it is dangerous to modify the dictionary keys while iterating over it. The answer is that it's not!

Contributor Author

Someone else provided comprehensive reasons (I have forwarded them) why it's dangerous and suggested that filter! is the safe approach: 'why not use filter!(.., result_dict), which handles the edge case of rehashing correctly for you' and 'deleting like you're doing is definitely dangerous'.

I checked that this code passes the tests:

function apply!(state::GeneralizedStabilizer, gate::PauliChannel)
    dict = state.destabweights
    stab = state.stab
    dtype = _dictvaltype(dict)
    tzero = zero(dtype)
    tone = one(dtype)
    newdict = typeof(dict)(tzero) 

    for ((dᵢ,dⱼ), χ) in dict # the state
        for ((Pₗ,Pᵣ), w) in zip(gate.paulis, gate.weights) # the channel
            phaseₗ, dₗ, dₗˢᵗᵃᵇ = rowdecompose(Pₗ, stab)
            phaseᵣ, dᵣ, dᵣˢᵗᵃᵇ = rowdecompose(Pᵣ, stab)
            c = (dot(dₗˢᵗᵃᵇ, dᵢ) + dot(dᵣˢᵗᵃᵇ, dⱼ)) * 2
            dᵢ′ = dₗ .⊻ dᵢ
            dⱼ′ = dᵣ .⊻ dⱼ
            χ′ = χ * w * (-tone)^c * (im)^(-phaseₗ + phaseᵣ + 4)
            
            # Optionally insert into newdict conditionally
            if abs(χ′) >= 1e-14  # TODO parameterize this pruning parameter
                if haskey(newdict, (dᵢ′, dⱼ′))
                    newdict[(dᵢ′, dⱼ′)] += χ′
                else
                    newdict[(dᵢ′, dⱼ′)] = χ′
                end
            end
        end
    end

    # Filter newdict based on abs value
    filter!(x -> abs(x[2]) >= 1e-14, newdict)

    state.destabweights = newdict
    state
end

@Fe-r-oz
Contributor Author

Fe-r-oz commented Jul 11, 2024

Kindly click on the code review suggestions for TODO2 and TODO3 [0a1c58f] to have a look at the attempts at TODO2 and TODO3 whenever convenient.

I sought some advice on this question: "Is this actually necessary? Is it actually dangerous to modify the dictionary keys while iterating over it?" After discussion, it turned out that this is dangerous and that filter! is the safe approach.

So, I have used filter! and also tried to parameterize the prune threshold. The default threshold is 1e-14, and the user can pass their desired threshold; if they don't pass anything, the default prune threshold is used. Also, TODO1 is reverted back to what it was, as that version is much faster.
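
As a rough sketch of the idea (the names _prune! and prune_threshold are placeholders for illustration, not necessarily the exact names in the commits):

function _prune!(dict; prune_threshold=1e-14)
    # drop all entries whose weight magnitude is below the threshold
    return filter!(kv -> abs(kv[2]) >= prune_threshold, dict)
end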

Hope this is plausible.

@Fe-r-oz
Contributor Author

Fe-r-oz commented Jul 14, 2024

Kindly check out a faster approach: 10.6 ns down to 3.5 ns.

julia> using BenchmarkTools

julia> n = 100000; a = rand(Int32, n);b = rand(Int32, n); c = rand(Int32, n);

julia> function _allthreesumtozero(a, b, c)
         @inbounds @simd for i in 1:length(a)
           iseven(a[i] + b[i] + c[i]) || return false
         end
         true
       end
_allthreesumtozero (generic function with 1 method)

julia> function _allthreesumtozero_optimized(a, b, c)
           @inbounds @simd for i in 1:length(a)
               ((a[i] + b[i] + c[i]) & 1) == 0 || return false
           end
           true
       end
_allthreesumtozero_optimized (generic function with 1 method)

julia> println(@benchmark _allthreesumtozero($a, $b, $c))
Trial(10.601 ns)

julia> println(@benchmark _allthreesumtozero_optimized($a, $b, $c))
Trial(3.505 ns)



@Fe-r-oz
Contributor Author

Fe-r-oz commented Jul 15, 2024

I used interactive rebase today by following your email (thanks for the comprehensive email); that caused a lot of emails today as I was going through the rebase. My apologies for that.

I have also reworded the commits so they are easy to follow, and tested the package locally as well.

@Fe-r-oz Fe-r-oz requested a review from Krastanov July 17, 2024 06:04
Member

@Krastanov Krastanov left a comment

Hi, Feroz! Please squash the commits you have made (or clean them up so that one commit is responsible for one task).

For instance, your first commit makes changes to multiple different things that are then undone in other commits, making it difficult to understand what is actually being done:

27953c7

Given that the changes here are fairly small, it is probably doable to just squash your commits. If this was a bigger set of changes, it would be best to be careful in separating the commits by topic.

@Fe-r-oz
Contributor Author

Fe-r-oz commented Jul 17, 2024

Closing in favor of #316

When I tried to squash the commits, HEAD diverged and I somehow ended up pushing the commits onto my master branch. I tried a hard reset. My apologies for this inconvenience!

Thank you very much again for your comprehensive comments and feedback.

pick 867c338 `Destabilizer` now preserves input Stabilizer tableaux
pick 40838f7 `StabMixture` renamed to `GeneralizedStabilizer`
pick 01ea563 rowdecompose was not accounting for input phase
pick 0664b0d `expect` for `GeneralizedStabilizer`
pick e12726d scaffolding for `project!` for `GeneralizedStabilizer`
pick 1f7ad99 Address _dictvaltype, _allthreesumtozero, apply! TODOs
pick f6b6c55 modifying TODO1(_dictvaltype), TODO2(_allthreesumtozero), and TODO3(apply!)
pick 58008c7 fixing tests

# Rebase 0dff482..58008c7 onto 0dff482 (8 commands)
#
# Commands:
# p, pick <commit> = use commit
# r, reword <commit> = use commit, but edit the commit message
# e, edit <commit> = use commit, but stop for amending
# s, squash <commit> = use commit, but meld into previous commit
# f, fixup <commit> = like "squash", but discard this commit's log message
# x, exec <command> = run command (the rest of the line) using shell
# b, break = stop here (continue rebase later with 'git rebase --continue')
# d, drop <commit> = remove commit
# l, label <label> = label current HEAD with a name
# t, reset <label> = reset HEAD to a label
# m, merge [-C <commit> | -c <commit>] <label> [# <oneline>]
# .       create a merge commit using the original merge commit's
# .       message (or the oneline, if no original merge commit was
# .       specified). Use -c <commit> to reword the commit message.
#
# These lines can be re-ordered; they are executed from top to bottom.

@Fe-r-oz Fe-r-oz closed this Jul 17, 2024
@Fe-r-oz Fe-r-oz deleted the nonstab branch July 17, 2024 17:37