Add `_resolve_execution_config` helper function to qnode.py #6498
base: master
Conversation
Hello. You may have forgotten to update the changelog!
Codecov Report: All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

```
@@            Coverage Diff             @@
##           master    #6498      +/-   ##
==========================================
+ Coverage   99.28%   99.33%   +0.04%
==========================================
  Files         452      451       -1
  Lines       43059    43022      -37
==========================================
- Hits        42752    42734      -18
+ Misses        307      288      -19
```

☔ View full report in Codecov by Sentry.
pennylane/workflow/qnode.py (outdated)

```python
):
    gradient_fn = qml.gradients.param_shift
else:
    gradient_fn = QNode.get_gradient_fn(
```
Potentially worth relying on something else so that we can deprecate `get_gradient_fn`.
Yeah ... I tried to use the new `qml.workflow._get_gradient_fn` earlier, but it caused some tests to fail. I'm taking a look at it now and trying to make it work. 😄

The problem is with this branch in `_get_gradient_fn`:

```python
...
if diff_method == "best":
    qn = qml.QNode(lambda: None, device, diff_method=None)
    return qml.workflow.get_best_diff_method(qn)()
...
```

Since `get_best_diff_method` has no direct access to the tape provided to `_get_gradient_fn`, it incorrectly returns `backprop` when it should be `parameter-shift` for finite shots. I tried setting the tape with `qn._tape = tape`, but that didn't work.

Any ideas?
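To make the failure mode concrete, here is a minimal plain-Python stand-in (illustrative names only, not PennyLane's actual API): a "best" chooser that never sees the tape's shots will always take the analytic branch.

```python
# Hypothetical sketch: backprop is only valid for analytic (shots=None)
# execution, while finite shots require parameter-shift.
def choose_best_diff_method(shots=None):
    if shots is None:
        return "backprop"
    return "parameter-shift"

tape_shots = 100                              # finite-shot tape
wrong = choose_best_diff_method()             # shots never forwarded -> "backprop"
right = choose_best_diff_method(tape_shots)   # shots forwarded -> "parameter-shift"
```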
I think the best way forward would be to modify `qml.workflow._get_gradient_fn` to not rely on `qml.workflow.get_best_diff_method`, and instead replace that block of code with the necessary logic from the latter:

```python
...
if diff_method == "best":
    config = _make_execution_config(None, "best")
    if device.supports_derivatives(config, circuit=tape):
        new_config = device.preprocess(config)[1]
        return new_config.gradient_method
    if tape and any(isinstance(o, qml.operation.CV) for o in tape):
        return qml.gradients.param_shift_cv
    return qml.gradients.param_shift
...
```
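The fallback order in that branch can be sketched with plain-Python stubs (all names here are illustrative stand-ins, not PennyLane's implementation): prefer a device-native derivative, then the CV parameter-shift rule when the tape contains continuous-variable ops, then the default parameter-shift rule.

```python
class StubDevice:
    """Stand-in device; `native` mimics supporting its own derivatives."""
    def __init__(self, native=False):
        self.native = native

    def supports_derivatives(self, config=None, circuit=None):
        return self.native

def resolve_best(device, tape_ops, has_cv):
    # 1. Device-native derivative wins if available.
    if device.supports_derivatives(circuit=tape_ops):
        return "device"
    # 2. CV circuits need the CV-specific parameter-shift rule.
    if has_cv:
        return "param_shift_cv"
    # 3. Default fallback.
    return "param_shift"
```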
I think you just need to do something like pass the shots from the tape in when calling the output of `get_best_diff_method` inside `_get_gradient_fn`, and that should resolve it. I.e. something along the lines of `return qml.workflow.get_best_diff_method(qn)(shots=tape.shots)`.
Interesting, I can try that.

Although, this might actually be an improvement to `_get_gradient_fn`, since `get_best_diff_method` is designed to be user-friendly, returning strings rather than transform functions. It seems kind of weird for an internal developer function to rely on a user-facing one. By extracting the relevant code from `get_best_diff_method`, we could simply return the transform function directly.
I can see that it might be nice if `_get_gradient_fn` returned a function instead of a string. I don't have a strong opinion either way: since we allow users to pass in `diff_method`, we are going to have a mix of strings and functions defining our gradients up to some point, and I think in our current implementation we still have a mix at this stage.
pennylane/workflow/execution.py (outdated)

```python
"tensorflow not found. Please install the latest "  # pragma: no cover
"version of tensorflow to enable the 'tensorflow' interface."  # pragma: no cover
```
I don't think we're compatible with the latest version of `tensorflow`.

But this is an internal function that should only ever be used if we've already established that the interface is `tensorflow`, so do we need to raise a "nice" error here instead of the standard import error?
I see what you mean, but I was just thinking of the edge case where the user doesn't have `tensorflow` installed in their environment and chooses to use `interface="tf"`. I think having a response akin to the `jax` error would be helpful.

> I don't think we're compatible with the latest version of tensorflow.

Good point, I can update the error message to:

```python
raise qml.QuantumFunctionError(  # pragma: no cover
    "tensorflow not found. Please install the latest "  # pragma: no cover
    "version of tensorflow supported by PennyLane "  # pragma: no cover
    "to enable the 'tensorflow' interface."  # pragma: no cover
) from e  # pragma: no cover
```
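The re-raise pattern being discussed can be sketched without PennyLane installed (the module name and the error class below are placeholders, not PennyLane's actual names): catch the `ImportError`, then raise a friendlier error chained to the original via `from e`.

```python
import importlib

class QuantumFunctionError(Exception):
    """Stand-in for qml.QuantumFunctionError."""

def load_interface(module_name):
    try:
        return importlib.import_module(module_name)
    except ImportError as e:
        # Re-raise with a user-facing message, keeping the original
        # import error attached as the cause.
        raise QuantumFunctionError(
            f"{module_name} not found. Please install the latest version "
            f"of {module_name} supported by PennyLane to enable this interface."
        ) from e

try:
    load_interface("not_a_real_module")  # placeholder missing package
except QuantumFunctionError as err:
    message = str(err)
```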
Is this the error that is raised if someone does that? I would hope we hit a user-facing error sooner than that (pretty much immediately), but I guess this is quite early in the pipeline. Either way, suggestion above looks good to me!
```python
raise qml.QuantumFunctionError(
    f"Differentiation method {diff_method} must be a gradient transform or a string."
)
```
Potentially we can move the `if isinstance(diff_method, str)` block below:

```python
raise qml.QuantumFunctionError(
    f"Differentiation method {diff_method} not recognized. Allowed "
    f"options are {tuple(get_args(SupportedDiffMethods))}."
)
```

That way we aren't left with a hanging ending to the function.
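The reordering can be sketched in plain Python (illustrative names and plain exception types, not the actual PennyLane code): handle callables first, reject non-strings immediately, and let the string branch end in its own raise so nothing falls off the end of the function.

```python
SUPPORTED_METHODS = ("best", "parameter-shift", "backprop", "adjoint")

def resolve_diff_method(diff_method):
    # Gradient transforms are passed through unchanged.
    if callable(diff_method):
        return diff_method
    # Anything that is neither callable nor a string is rejected up front.
    if not isinstance(diff_method, str):
        raise TypeError(
            f"Differentiation method {diff_method} must be a gradient "
            "transform or a string."
        )
    if diff_method in SUPPORTED_METHODS:
        return diff_method
    # The string branch now terminates the function itself,
    # so there is no hanging fall-through at the end.
    raise ValueError(
        f"Differentiation method {diff_method} not recognized. "
        f"Allowed options are {SUPPORTED_METHODS}."
    )
```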
Since the whole purpose of `_get_gradient_fn` is to be a step inside of `_resolve_execution_config`, we can update the call signature to:

```python
def _resolve_diff_method(initial_config, device, tape):
```

Then we can determine whether or not the device supports the diff method for the full initial config, and not just the temporary version created for the time being.

We could even update it to be:

```python
def _resolve_diff_method(initial_config, device, tape) -> "ExecutionConfig":
```

where it does the device setup and, if the device doesn't support the diff method, fills in `gradient_method`.
Context:
The goal is to streamline the process of updating the execution configuration with non-device-specific settings. This approach consolidates the setup and validation of the execution configuration in one spot in the code, before calling `qml.execute` in `qnode.py`.

Description of the Change:
This refactor organizes the setup of the initial `config` by moving it into a dedicated helper function. It also includes the setup and validation code for handling the `MCMConfig`.

Benefits: Improves code cleanliness in `qnode.py` and allows for better testing of the internal pipeline.

Possible Drawbacks: None

[sc-72086]