ONNX model optimization failed. #1405

Open

prashant-saxena opened this issue Oct 5, 2024 · 0 comments
prashant-saxena commented Oct 5, 2024

Bug 1

After installing the latest version from GitHub using pip, run `olive --help` or `olive -h` at the command prompt:

```
Traceback (most recent call last):
  File "runpy.py", line 196, in _run_module_as_main
  File "runpy.py", line 86, in _run_code
  File "D:\projects\enhance\.venv\Scripts\olive.exe\__main__.py", line 7, in <module>
  File "D:\projects\enhance\.venv\lib\site-packages\olive\cli\launcher.py", line 55, in main
    parser.print_help()
  File "argparse.py", line 2550, in print_help
  File "argparse.py", line 2534, in format_help
  File "argparse.py", line 283, in format_help
  File "argparse.py", line 214, in format_help
  File "argparse.py", line 214, in <listcomp>
  File "argparse.py", line 214, in format_help
  File "argparse.py", line 214, in <listcomp>
  File "argparse.py", line 542, in _format_action
  File "argparse.py", line 530, in _format_action
  File "argparse.py", line 626, in _expand_help
TypeError: unsupported operand type(s) for %: 'tuple' and 'dict'
```
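For context, a minimal sketch of what appears to trigger this (an assumption inferred from the traceback, not a confirmed root cause): argparse expands help text with `help_string % params` inside `_expand_help()`, so a help value that is accidentally a tuple fails with this exact `TypeError`:

```python
import argparse

parser = argparse.ArgumentParser(prog="olive")
parser.add_argument(
    "--example",
    # The trailing comma after the string turns the help value into a
    # 1-tuple. argparse's _expand_help() later evaluates `help % params`,
    # and `tuple % dict` raises:
    # TypeError: unsupported operand type(s) for %: 'tuple' and 'dict'
    help=(
        "first half of the help text, "
        "second half of the help text",
    ),
)
parser.print_help()  # fails with the same TypeError as above
```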

Bug 2

Reproduce:

```python
from olive.workflows import run as olive_run

config = {
    "input_model": {
        "type": "ONNXModel",
        "model_path": "models/codeformer.onnx",
        "inference_settings": {
            "input_names": ["x", "w"],
            "input_types": ["float", "double"],
            "input_shapes": [[0, 3, 512, 512], [1]],
            "output_names": ["y"],
            "dynamic_axes": {"x": {"0": "batch_size"}, "y": {"0": "batch_size"}},
        },
    },
    "systems": {
        "LocalSystem": {
            "type": "LocalSystem",
            "accelerators": [{"device": "gpu", "execution_providers": ["OpenVINOExecutionProvider"]}],
        }
    },
    "passes": {
        "xxx": {
            "type": "DynamicToFixedShape",
            "input_name": ["x"],
            "input_shape": [[1, 3, 512, 512]],
        },
        "yyy": {
            "type": "OnnxFloatToFloat16",
        },
    },
    "engine": {
        "log_severity_level": 0,
    },
}

olive_run(config)
```

Log

```
[2024-10-07 07:47:34,983] [INFO] [accelerator_creator.py:96:_fill_accelerators] There is no any accelerator specified. Inferred accelerators: [AcceleratorConfig(device='cpu', execution_providers=['CPUExecutionProvider'])]
```

How do I fix the accelerator configuration so that the GPU / OpenVINO accelerator defined under "systems" is actually used?
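A guess at a fix (assuming the standard Olive workflow-config layout, where the engine selects systems by name via "host" and "target"): the "engine" section above never references the "LocalSystem" entry, so Olive falls back to the inferred CPU accelerator. Something like this might make the GPU/OpenVINO accelerator take effect:

```python
# Hypothetical fix sketch: point the engine at the system defined above.
config["engine"] = {
    "log_severity_level": 0,
    "host": "LocalSystem",    # name of the entry under "systems"
    "target": "LocalSystem",
}
```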

Error when loading and checking the output model:

```
onnx.checker.check_model(onnx_model)
  File "D:\enhance\.venv\lib\site-packages\onnx\checker.py", line 179, in check_model
    C.check_model(
onnx.onnx_cpp2py_export.checker.ValidationError: Nodes in a graph must be topologically sorted, however input '/TopK_input_cast_0' of node:
name: /TopK OpType: TopK
 is not output of any previous nodes.
```

The model has two more outputs, 'logits' and 'style_feat', which I'm not using. Is there any pass to remove these outputs?
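In case it helps, one workaround outside of Olive (a sketch using onnx's model extractor; the paths are placeholders) would be to slice the graph down to just the 'y' output, which drops 'logits' and 'style_feat' along with any nodes only they depend on:

```python
import onnx.utils

# Keep only the subgraph needed to produce 'y'; the unused outputs
# 'logits' and 'style_feat' (and their exclusive nodes) are removed.
onnx.utils.extract_model(
    "models/codeformer.onnx",         # placeholder input path
    "models/codeformer_y_only.onnx",  # placeholder output path
    input_names=["x", "w"],
    output_names=["y"],
)
```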

If a model has two float inputs, 'x' and 'y', how do you make 'x' float16 while keeping 'y' as float?
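One possible approach outside of Olive (a sketch; it assumes onnxconverter-common's `keep_io_types` also accepts a list of tensor names, not just a bool): convert the whole model to float16 but list 'y' as an IO to keep in float32:

```python
import onnx
from onnxconverter_common import float16

model = onnx.load("model.onnx")  # placeholder path
# keep_io_types as a list (assumed API): listed inputs/outputs stay float32,
# everything else, including input 'x', is converted to float16.
model_fp16 = float16.convert_float_to_float16(model, keep_io_types=["y"])
onnx.save(model_fp16, "model_fp16.onnx")
```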

On my Dell laptop with an Intel integrated GPU (i620), the OpenVINOExecutionProvider gives the best performance with the device type set to GPU. To improve performance further, I would like to try model optimization with Olive.

jambayk pushed a commit that referenced this issue Oct 17, 2024
## Describe your changes

Fix CLI argparse help format: tuple -> str. Now `olive -h` works as expected.

```
➜ Olive git:(xiaoyu/cli) olive -h
usage: olive

positional arguments:
  {run,auto-opt,capture-onnx-graph,finetune,generate-adapter,convert-adapters,quantize,tune-session-params,configure-qualcomm-sdk,manage-aml-compute,shared-cache}
    run                 Run an olive workflow
    auto-opt            Automatically optimize the performance of the input model.
    capture-onnx-graph  Capture ONNX graph using PyTorch Exporter or Model Builder from the Huggingface model or
                        PyTorch model.
    finetune            Fine-tune a model on a dataset using peft. Huggingface training arguments can be provided
                        along with the defined options.
    generate-adapter    Generate ONNX model with adapters as inputs. Only accepts ONNX models.
    convert-adapters    Convert lora adapter weights to a file that will be consumed by ONNX models generated by
                        Olive ExtractedAdapters pass.
    quantize            Quantize the input model
    tune-session-params
                        Automatically tune the session parameters for a given onnx model. Currently, for onnx model
                        converted from huggingface model and used for generative tasks, user can simply provide the
                        --model onnx_model_path --hf_model_name hf_model_name --device device_type to get the tuned
                        session parameters.
    configure-qualcomm-sdk
                        Configure Qualcomm SDK for Olive
    manage-aml-compute  Create new compute in your AzureML workspace
    shared-cache        Shared cache model operations

options:
  -h, --help            show this help message and exit
```
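For illustration, the general shape of such a fix (a sketch, not the actual Olive diff) is to collapse any tuple-valued help text into a single string before argparse interpolates it:

```python
def normalize_help(help_text):
    # `help % params` only works on str; join accidental tuples first.
    return " ".join(help_text) if isinstance(help_text, tuple) else help_text
```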
## Checklist before requesting a review
- [ ] Add unit tests for this change.
- [ ] Make sure all tests can pass.
- [ ] Update documents if necessary.
- [ ] Lint and apply fixes to your code by running `lintrunner -a`
- [ ] Is this a user-facing change? If yes, give a description of this
change to be included in the release notes.
- [ ] Is this PR including examples changes? If yes, please remember to
update [example
documentation](https://github.com/microsoft/Olive/blob/main/docs/source/examples.md)
in a follow-up PR.

## (Optional) Issue link
#1405