
Adding OMEinsumContractionOrders.jl as a backend of TensorOperations.jl for finding the optimal contraction order #185

Open · wants to merge 16 commits into base: master

Conversation

ArrogantGao

An extension OMEinsumContractionOrdersExt has been added, which provides OMEinsumContractionOrders.jl as a backend for optimizing the contraction order.

We modified the @tensor macro and added a keyword opt_algorithm for selecting a specific optimizer. The default optimizer is NCon, which is the original one used in TensorOperations.jl. After the extension is loaded, the following optimizers are provided:

  • GreedyMethod: greedily selects the pair of tensors with the smallest contraction cost at each step
  • TreeSA: tree simulated annealing, based on local search and simulated annealing
  • KaHyParBipartite and SABipartite: treat the tensor network as a hypergraph, partition it into two parts, and then recursively partition each part
  • ExactTreewidth: based on the exact treewidth solver TreeWidthSolver.jl; uses a tree decomposition with minimal treewidth to construct a contraction order with optimal complexity

The first four optimizers are approximate and suitable for large networks, while the last one is an exact solver, which is only suitable for small networks.

Here is an example of use:

using TensorOperations, OMEinsumContractionOrders
A = randn(5, 5, 5, 5)
B = randn(5, 5, 5)
C = randn(5, 5, 5)

@tensor begin
    D1[a, b, c, d] := A[a, e, c, f] * B[g, d, e] * C[g, f, b]
end

@tensor opt = (a => 5, b => 5, c => 5, d => 5, e => 5, f => 5, g => 5) opt_algorithm = GreedyMethod begin
    D2[a, b, c, d] := A[a, e, c, f] * B[g, d, e] * C[g, f, b]
end

@tensor opt = (a => 5, b => 5, c => 5, d => 5, e => 5, f => 5, g => 5) opt_algorithm = TreeSA begin
    D3[a, b, c, d] := A[a, e, c, f] * B[g, d, e] * C[g, f, b]
end

@tensor opt = (a => 5, b => 5, c => 5, d => 5, e => 5, f => 5, g => 5) opt_algorithm = KaHyParBipartite begin
    D4[a, b, c, d] := A[a, e, c, f] * B[g, d, e] * C[g, f, b]
end

@tensor opt = (a => 5, b => 5, c => 5, d => 5, e => 5, f => 5, g => 5) opt_algorithm = SABipartite begin
    D5[a, b, c, d] := A[a, e, c, f] * B[g, d, e] * C[g, f, b]
end

@tensor opt = (a => 5, b => 5, c => 5, d => 5, e => 5, f => 5, g => 5) opt_algorithm = ExactTreewidth begin
    D6[a, b, c, d] := A[a, e, c, f] * B[g, d, e] * C[g, f, b]
end

@ArrogantGao
Author

ArrogantGao commented Aug 7, 2024

PS: currently, the contents of this PR rely on the latest code on the master branch of OMEinsumContractionOrders.jl, so CI may not pass online for now.

@lkdvos
Collaborator

lkdvos commented Aug 7, 2024

Hi Xuanzhao,
Thanks for this, it looks really cool!
I will try and make some time for properly reviewing this next week, but it definitely looks like a great addition.
Do you know when the latest code in OMEinsumContractionOrders.jl will be released, so we can run the tests?

@ArrogantGao
Author

Thanks a lot.
The new version will be released soon (JuliaRegistries/General#112591).


codecov bot commented Aug 7, 2024

Codecov Report

Attention: Patch coverage is 94.94949% with 5 lines in your changes missing coverage. Please review.

Project coverage is 83.50%. Comparing base (455a69f) to head (d39c6ea).
Report is 25 commits behind head on master.

| Files with missing lines | Patch % | Lines |
| --- | --- | --- |
| ...xt/TensorOperationsOMEinsumContractionOrdersExt.jl | 92.85% | 4 Missing ⚠️ |
| src/indexnotation/tensormacros.jl | 92.85% | 1 Missing ⚠️ |
Additional details and impacted files
@@            Coverage Diff             @@
##           master     #185      +/-   ##
==========================================
+ Coverage   83.06%   83.50%   +0.43%     
==========================================
  Files          25       29       +4     
  Lines        2150     2546     +396     
==========================================
+ Hits         1786     2126     +340     
- Misses        364      420      +56     


@ArrogantGao
Author

ArrogantGao commented Aug 9, 2024

I have added some benchmark results; they are stored at https://github.com/ArrogantGao/OMEinsumContractionOrders_Benchmark.

For some of the small graphs provided by Graphs.jl, I tested the complexity reached by the different optimizers, treating the vertices of the graph as tensors and the edges as indices, with all bond dimensions set to 16. The table below lists the log2 time complexity for each solver; it shows that for small tensor networks the new optimizers perform similarly to NCon.

| graphname | nv | ne | NCon | ExactTreeWidth | GreedyMethod | KaHyParBipartite | SABipartite | TreeSA |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| bull | 5 | 5 | 13.586839787961827 | 13.586839787961827 | 13.586839787961827 | 13.586839787961827 | 13.586839787961827 | 13.586839787961827 |
| chvatal | 12 | 24 | 42.58519730958958 | 42.614824864334814 | 45.09016476707785 | 45.09016476707785 | 45.08752564390082 | 42.58519730958958 |
| cubical | 8 | 12 | 25.170081535380607 | 25.672535838279206 | 25.170081535380607 | 25.170081535380607 | 25.170081535380607 | 25.170081535380607 |
| diamond | 4 | 5 | 17.002815015607055 | 17.002815015607055 | 17.002815015607055 | 17.002815015607055 | 17.002815015607055 | 17.002815015607055 |
| frucht | 12 | 18 | 23.322491537597468 | 23.322491537597468 | 23.322491537597468 | 23.322491537597468 | 23.322491537597468 | 23.322491537597468 |
| heawood | 14 | 21 | 33.593041367648866 | 34.34131467023274 | 33.64465198851069 | 33.593157658102996 | 33.593157658102996 | 33.59304136764886 |
| house | 5 | 6 | 17.047123912114024 | 17.586839787961825 | 17.047123912114028 | 17.047123912114024 | 17.047123912114024 | 17.047123912114024 |
| housex | 5 | 8 | 24.169944569113444 | 25.58507990276711 | 24.17023805233665 | 24.169944569113444 | 24.169944569113444 | 24.169944569113444 |
| krackhardtkite | 10 | 18 | 36.1708640318715 | 37.04747546813754 | 36.170883586512154 | 36.17088358651215 | 36.170883586512154 | 36.1708640318715 |
| octahedral | 6 | 12 | 33.087473200638705 | 33.087473200638705 | 33.087473200638705 | 33.087473200638705 | 33.087473200638705 | 33.087473200638705 |
| petersen | 10 | 15 | 29.098042366733186 | 29.649263195591782 | 29.09804236673319 | 29.09819661452721 | 29.09804236673319 | 29.09804236673319 |
| sedgewickmaze | 8 | 10 | 20.25740625933165 | 21.64577099785494 | 20.257682480708766 | 20.257682480708763 | 20.257682480708766 | 20.25740625933165 |
| tetrahedral | 4 | 6 | 21.002815015607055 | 21.002815015607055 | 21.002815015607055 | 21.002815015607055 | 21.002815015607055 | 21.002815015607055 |
| truncatedtetrahedron | 12 | 18 | 23.322491537597468 | 23.322491537597468 | 23.322491537597468 | 23.322491537597468 | 24.644081593265675 | 23.322491537597468 |

@ArrogantGao
Author

ArrogantGao commented Aug 9, 2024

For a large tensor network, such as the one mentioned in #63, which corresponds to a C60 sphere, NCon failed to optimize the contraction order within 30 minutes, while the performance of the new optimizers is shown below:

| method | tc | time (s) |
| --- | --- | --- |
| GreedyMethod | 15.97010589061218 | 0.006853833 |
| KaHyParBipartite | 16.057336033995497 | 2.534312041 |
| SABipartite | 15.691770776604246 | 0.107419917 |
| TreeSA | 15.255323690008138 | 3.770639667 |

Collaborator

@lkdvos left a comment

Overall, I think this is a great addition to the package. Our CI complains that there is a formatting error somewhere; you could fix this by running import JuliaFormatter; format("TensorOperations/src/").

As promised, let me also add my "wishlist"; note that you should definitely not feel obliged to do all of this or do it by yourself, it's just for completeness that I am adding it.


As we discussed, it would make a lot of sense to have a way of using OMEinsumContractionOrders in the context of ncon, where the dynamic setting gives access to the sizes of the tensors automatically.
I would suggest hooking in along these lines:

function ncon(tensors, network, conjlist=fill(false, length(tensors));
                          order=nothing, output=nothing, kwargs...)
     ...
    tree = contractiontree(network, order)
    return ncon(tensors, network, conjlist, tree; kwargs...)
end

contractiontree(network, ::Nothing) = ncontree(network)
contractiontree(network, order::NTuple{N,A}) where {N,A<:LabelType} = indexordertree(network, order)

function ncon(tensors, network, conjlist, tree; kwargs...)
    # copy previous implementation
end

# in the extension:
function contractiontree(network, eincodeoptimizer::T) where {T<:Eincodeoptimizer}
    # implementation here
end

function ncon(tensors, network, conjlist, tree::NestedEinsum; kwargs...)
    # check if binary contraction tree, otherwise error
    # check that no multi-indices exist
    # implementation based on Einsum tree
end

(Keep in mind that I am not too familiar with the OMEinsum objects, so there might be mistakes in this pseudocode)

This approach has the benefit that it is easily extensible, and would also allow "expert usage" in the sense that if you want to cache the optimized contraction order, you could directly dispatch to the "four argument ncon".
There are definitely some bumps and kinks with the traces etc., but this might make a good first implementation.
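To illustrate the "expert usage" point: with the interface sketched above, one could optimize the contraction order once and reuse it for repeated contractions over the same network (a sketch using the hypothetical names from the pseudocode; batches_of_tensors is just an illustrative placeholder):

# optimize the contraction order once, with whatever optimizer object is passed
tree = contractiontree(network, optimizer)

# ... and reuse it for many contractions that share the same network structure
for tensors in batches_of_tensors
    result = ncon(tensors, network, conjlist, tree)
end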

Comment on lines 35 to 39
try
@assert TDV <: Number
catch
throw(ArgumentError("The values of the optdata dictionary must be of type Number"))
end
Collaborator

This is a bit of a strange construction; try ... catch ... end typically means you would handle the error it throws, not rethrow a new error. You can add a message to the assertion error too:

Suggested change
try
@assert TDV <: Number
catch
throw(ArgumentError("The values of the optdata dictionary must be of type Number"))
end
@assert TDV <: Number "The values of `optdata` must be `<:Number`"

Author

Sorry for the mistake; I now simply use @assert instead of try...catch.

@@ -85,6 +86,12 @@ function tensorparser(tensorexpr, kwargs...)
val in (:warn, :cache) ||
throw(ArgumentError("Invalid use of `costcheck`, should be `costcheck=warn` or `costcheck=cache`"))
parser.contractioncostcheck = val
elseif name == :opt_algorithm
Collaborator

I think you will have to be a little careful here; in principle there is no order to the keyword arguments.
If I am not mistaken, if the user first supplies opt=(a = 2, b = 2, ...) and only afterwards opt_algorithm=..., the algorithm will currently be ignored.

My best guess is that you want to first extract an optimizer and an optdict, and only construct the contractiontreebuilder after all kwargs have been parsed.
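For concreteness, a rough sketch of that ordering. This is not the actual parser code: build_treebuilder is a hypothetical helper, the kwarg handling is simplified, and the optimaltree/TreeOptimizer call mirrors the one used elsewhere in this PR.

import TensorOperations: optimaltree, TreeOptimizer

# Scan all keyword arguments first, then build the contraction-tree builder, so the
# relative order of `opt` and `opt_algorithm` no longer matters. The returned closure
# would then be assigned to parser.contractiontreebuilder.
function build_treebuilder(kwargs)
    opt_algorithm = :ExhaustiveSearch        # default optimizer
    optdict = nothing
    for (name, val) in kwargs
        if name == :opt
            optdict = val                    # index => dimension data
        elseif name == :opt_algorithm
            opt_algorithm = val              # requested optimizer
        end
        # ... the other keyword arguments would be handled here ...
    end
    optdict === nothing && return nothing
    return network -> optimaltree(network, optdict, TreeOptimizer{opt_algorithm}(), false)[1]
end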

Author

Thank you very much for pointing that out, I did not notice that previously.
In the revised version, the contractiontreebuilder is constructed after all other kwargs have been parsed.

Comment on lines 22 to 44
@tensor opt = (a => 5, b => 5, c => 5, d => 5, e => 5, f => 5, g => 5) opt_algorithm = GreedyMethod begin
D2[a, b, c, d] := A[a, e, c, f] * B[g, d, e] * C[g, f, b]
end

@tensor opt = (a => 5, b => 5, c => 5, d => 5, e => 5, f => 5, g => 5) opt_algorithm = TreeSA begin
D3[a, b, c, d] := A[a, e, c, f] * B[g, d, e] * C[g, f, b]
end

@tensor opt = (a => 5, b => 5, c => 5, d => 5, e => 5, f => 5, g => 5) opt_algorithm = KaHyParBipartite begin
D4[a, b, c, d] := A[a, e, c, f] * B[g, d, e] * C[g, f, b]
end

@tensor opt = (a => 5, b => 5, c => 5, d => 5, e => 5, f => 5, g => 5) opt_algorithm = SABipartite begin
D5[a, b, c, d] := A[a, e, c, f] * B[g, d, e] * C[g, f, b]
end

@tensor opt = (a => 5, b => 5, c => 5, d => 5, e => 5, f => 5, g => 5) opt_algorithm = ExactTreewidth begin
D6[a, b, c, d] := A[a, e, c, f] * B[g, d, e] * C[g, f, b]
end

@tensor opt = (1 => 5, 2 => 5, 3 => 5, 4 => 5, 5 => 5, 6 => 5, 7 => 5) opt_algorithm = GreedyMethod begin
D7[1, 2, 3, 4] := A[1, 5, 3, 6] * B[7, 4, 5] * C[7, 6, 2]
end
Collaborator

I think it could be nice if you could somehow check that the algorithms are indeed being used. Perhaps you can enable debug logging for this section, and check the logs for the debug message?

Author

A debug message has been added; one can enable debug logging by setting ENV["JULIA_DEBUG"] = "TensorOperationsOMEinsumContractionOrdersExt".
The logging is enabled in the tests.

Collaborator

I meant to add that you can maybe verify that this worked using: @test_logs? I am not sure about this however, there might be something weird with evaluation time because it is in a macro.

test/runtests.jl Outdated
Comment on lines 44 to 46
@testset "OMEinsumOptimizer extension" begin
include("omeinsumcontractionordres.jl")
end
Collaborator

I think the compat with OMEinsumContractionOrders only allows Julia >= v1.9, so you should probably also restrict the tests to that version.

Author

I changed the lowest version of Julia in CI to 1.9. Is there any reason that 1.8 needs to be supported?

Collaborator

Sorry, I did not explain properly, what I meant is:

Suggested change
@testset "OMEinsumOptimizer extension" begin
include("omeinsumcontractionordres.jl")
end
if VERSION >= v"1.9"
@testset "OMEinsumOptimizer extension" begin
include("omeinsumcontractionordres.jl")
end
end

I don't think there is any particular reason to support 1.8, but since it works anyways, and until 1.10 becomes the new LTS, I would just keep this supported.

Author

Sure, I added this to the tests and changed the lowest supported version back to v1.8.

Author

ArrogantGao commented Aug 15, 2024

We managed to make OMEinsumContractionOrders.jl support Julia v1.8 in v0.9.2; I updated the compat so that CI for 1.8 should pass now.

@ArrogantGao
Author

Thank you very much for your detailed suggestions; I will revise the code according to the comments ASAP.

@ArrogantGao
Author

ArrogantGao commented Aug 14, 2024

The following changes have been made:

  • the default optimizer has been renamed from NCon to ExhaustiveSearch
  • the ncon function can now use the TreeOptimizer to optimize the contraction order dynamically
  • a debug log has been added to the optimaltree function, to show which optimizer is used
  • the lowest supported version has been updated to v1.9

I also formatted the code using JuliaFormatter.

@ArrogantGao
Author

ArrogantGao commented Aug 14, 2024

For the ncon function, I split it into three methods.
With the order given, one can use

ncon(tensorlist, indexlist, conjlist; order, output, kwargs...)

with an optimizer given, one can use

ncon(tensorlist, indexlist, optimizer, conjlist; output, kwargs...)

and with a tree given, one can use

ncon(tensorlist, indexlist, conjlist, tree, output; kwargs...)

The former two methods first generate the tree according to the order or the optimizer, and then call the third one.

In this way, I do not need to import the data structures from OMEinsumContractionOrders into TensorOperations, and it will be easy for others who want to add more optimizers, since they only need to define their own optimaltree function.
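For illustration, here is a rough sketch of what such a third-party optimaltree method could look like. The four-argument signature and the (tree, cost) return convention are assumptions based on the optimaltree(network, optdata, TreeOptimizer{optimizer}(), false)[1] call used elsewhere in this PR, and the :LeftToRight optimizer and its naive strategy are purely hypothetical:

import TensorOperations: optimaltree, TreeOptimizer

# Hypothetical optimizer: contract tensors 1 and 2 first, then fold in tensor 3,
# and so on. A real optimizer would search for a cheaper contraction tree.
function optimaltree(network, optdata, ::TreeOptimizer{:LeftToRight}, verbose::Bool)
    tree = 1
    for i in 2:length(network)
        tree = Any[tree, i]           # nested pairwise contraction tree
    end
    verbose && @info "using the naive left-to-right contraction order"
    return tree, nothing              # second slot stands in for the (unestimated) cost
end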

Collaborator

@lkdvos left a comment

Thanks for the changes, this looks great too!
I left some more comments about specific things in the meantime.

@@ -21,7 +21,7 @@ jobs:
fail-fast: false
matrix:
version:
- '1.8' # lowest supported version
- '1.9' # lowest supported version
Collaborator

Suggested change
- '1.9' # lowest supported version
- '1.8' # lowest supported version


@@ -1,5 +1,6 @@
"""
ncon(tensorlist, indexlist, [conjlist, sym]; order = ..., output = ..., backend = ..., allocator = ...)
ncon(tensorlist, indexlist, optimizer, conjlist; output=..., kwargs...)
Collaborator

Can we revert the argument order to:

Suggested change
ncon(tensorlist, indexlist, optimizer, conjlist; output=..., kwargs...)
ncon(tensorlist, indexlist, conjlist, optimizer; output=..., kwargs...)

Author

This may lead to incorrect dispatch, since conjlist has a default value.
To avoid conflicts, I combined these two cases into

ncon(tensorlist, indexlist, [conjlist, sym]; order = ..., output = ..., optimizer = ..., backend = ..., allocator = ...)

where optimizer is now a keyword argument, and we do not allow order and optimizer to be specified together.
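As a usage sketch, a call under this combined signature could then look like the following. The optimizer keyword and the GreedyMethodOptimizer object follow the exports discussed later in this thread, and the index lists simply encode the example contraction from the PR description:

using TensorOperations, OMEinsumContractionOrders

A = randn(5, 5, 5, 5); B = randn(5, 5, 5); C = randn(5, 5, 5)

# negative labels are open indices, positive labels are contracted indices
network = [[-1, 1, -3, 2], [3, -4, 1], [3, 2, -2]]

# optimizer is a keyword argument; specifying both `order` and `optimizer` is not allowed
D = ncon([A, B, C], network; optimizer=GreedyMethodOptimizer())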

optdata = Dict{Any,Number}()
for (i, ids) in enumerate(network)
for (j, id) in enumerate(ids)
optdata[id] = size(tensors[i], j)
Collaborator

Here, you probably want to use tensorstructure(tensors[i], j, conjlist[i]), as this is part of our interface (cf. tensorstructure(A, iA, conjA)). In particular, size might not always be defined for custom tensor types.

Author

ArrogantGao commented Aug 15, 2024

Thank you very much for pointing that out, I had not noticed that the conjlist may change the size of the tensor.

end
end

tree = optimaltree(network, optdata, TreeOptimizer{optimizer}(), false)[1]
Collaborator

Here, it seems a bit strange to me to keep the optimizer as a Symbol. Would it not make more sense to immediately pass the optimizer itself?

Author

Previously I wanted to use that to avoid exporting the structure TreeOptimizer.
Now I export it together with the optimizers

export TreeOptimizer, ExhaustiveSearchOptimizer, GreedyMethodOptimizer, KaHyParBipartiteOptimizer, TreeSAOptimizer, SABipartiteOptimizer, ExactTreewidthOptimizer

so that now we can directly pass the optimizer into the function.
Do you think it's a good idea?
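For instance, with those exports the call above could presumably be written as follows (assuming the exported names construct the corresponding TreeOptimizer instances):

tree = optimaltree(network, optdata, GreedyMethodOptimizer(), false)[1]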

using OMEinsumContractionOrders

# open the debug mode to check the optimization algorithms used
ENV["JULIA_DEBUG"] = "TensorOperationsOMEinsumContractionOrdersExt"
Collaborator

This does set it globally though, so if someone had it set to something else, you would overwrite that here.
Perhaps you can make use of a with_logger clause to temporarily change the logger to enable debug messages?
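A minimal sketch of that approach, using the standard Logging and Test machinery; the small ncon example, the optimizer object, and the expected debug text are just placeholders for whatever the extension actually logs:

using TensorOperations, OMEinsumContractionOrders, Logging, Test

A = randn(2, 2); B = randn(2, 2)
network = [[-1, 1], [1, -2]]           # a simple matrix product in ncon notation

# temporarily enable debug-level output for a single call instead of mutating ENV
with_logger(ConsoleLogger(stderr, Logging.Debug)) do
    ncon([A, B], network; optimizer=GreedyMethodOptimizer())
end

# or, in the test suite, assert that the expected debug message is actually emitted
@test_logs (:debug, r"GreedyMethod") min_level=Logging.Debug match_mode=:any ncon(
    [A, B], network; optimizer=GreedyMethodOptimizer())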

Author

ArrogantGao commented Aug 15, 2024

I tried @test_logs and with_logger; it seems they do not work correctly with the @tensor macro, maybe because of precompilation.
I removed the global setting and used @test_logs to test the ncon functions, which use the same optimaltree functions for contraction order optimization as @tensor.
