
feat: gradio WebUI #51

Merged: 3 commits into fudan-generative-vision:main on Jun 20, 2024
Conversation

cocktailpeanut
Contributor

  1. audio fix: explicitly specify the audio codec in `util.py`; otherwise the video is technically corrupt, doesn't play sound, and can't be uploaded online (see the sketches after this list)
  2. web ui: gradio web ui
  3. print the current step while running inference
  4. added gradio dependency to requirements.txt
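
For context, the codec fix amounts to passing an explicit audio codec when writing the final video. A minimal sketch of the idea with moviepy (the exact call site and file names in `util.py` may differ):

```python
from moviepy.editor import VideoFileClip, AudioFileClip

# Attach the generated audio and write with an explicit AAC audio codec;
# without `audio_codec`, some players treat the mp4's audio track as corrupt.
video = VideoFileClip("output_silent.mp4")
video = video.set_audio(AudioFileClip("driving_audio.wav"))
video.write_videofile("output.mp4", codec="libx264", audio_codec="aac")
```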
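
The web UI wraps the existing CLI inference in a small gradio app. A rough sketch of the shape of `scripts/app.py` (the parameter names and stub body here are illustrative, not the PR's exact code; per the traceback below, the real `predict` builds an args object and returns `inference_process(args)`):

```python
import gradio as gr

def predict(source_image, driving_audio):
    # Hypothetical wrapper: build the argument object the existing CLI
    # expects, run inference, and return the generated video's path so
    # gradio can display it.
    ...

demo = gr.Interface(
    fn=predict,
    inputs=[gr.Image(type="filepath"), gr.Audio(type="filepath")],
    outputs=gr.Video(),
)
demo.launch()  # pass share=True here for a public link
```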

@cocktailpeanut mentioned this pull request on Jun 19, 2024
Member

@AricGamma left a comment


Please fix the static-check issues.
Commands:

  1. `isort $(git ls-files "*.py")`
  2. `pylint $(git ls-files "*.py")` and fix lint issues manually.

`scripts/inference.py` (review comment resolved)
@AricGamma changed the title from "WebUI + Audio Fix" to "feat: gradio WebUI" on Jun 19, 2024
@nitinmukesh

@cocktailpeanut

I tried after updating the files and got the error below. It works fine without updating these files:

(venv) C:\sd\hallo>python scripts/app.py
A matching Triton is not available, some optimizations will not be enabled
Traceback (most recent call last):
  File "C:\sd\hallo\venv\lib\site-packages\xformers\__init__.py", line 55, in _is_triton_available
    from xformers.triton.softmax import softmax as triton_softmax  # noqa
  File "C:\sd\hallo\venv\lib\site-packages\xformers\triton\softmax.py", line 11, in <module>
    import triton
ModuleNotFoundError: No module named 'triton'
INFO:albumentations.check_version:A new version of Albumentations is available: 1.4.9 (you have 1.4.8). Upgrade using: pip install --upgrade albumentations
INFO:httpx:HTTP Request: GET https://checkip.amazonaws.com/ "HTTP/1.1 200 "
Running on local URL:  http://127.0.0.1:7860
INFO:httpx:HTTP Request: GET http://127.0.0.1:7860/startup-events "HTTP/1.1 200 OK"
INFO:httpx:HTTP Request: HEAD http://127.0.0.1:7860/ "HTTP/1.1 200 OK"

To create a public link, set `share=True` in `launch()`.
INFO:httpx:HTTP Request: GET https://api.gradio.app/pkg-version "HTTP/1.1 200 OK"
ERROR:    Exception in ASGI application
Traceback (most recent call last):
  File "C:\sd\hallo\venv\lib\site-packages\uvicorn\protocols\http\httptools_impl.py", line 399, in run_asgi
    result = await app(  # type: ignore[func-returns-value]
  File "C:\sd\hallo\venv\lib\site-packages\uvicorn\middleware\proxy_headers.py", line 70, in __call__
    return await self.app(scope, receive, send)
  File "C:\sd\hallo\venv\lib\site-packages\fastapi\applications.py", line 1054, in __call__
    await super().__call__(scope, receive, send)
  File "C:\sd\hallo\venv\lib\site-packages\starlette\applications.py", line 123, in __call__
    await self.middleware_stack(scope, receive, send)
  File "C:\sd\hallo\venv\lib\site-packages\starlette\middleware\errors.py", line 186, in __call__
    raise exc
  File "C:\sd\hallo\venv\lib\site-packages\starlette\middleware\errors.py", line 164, in __call__
    await self.app(scope, receive, _send)
  File "C:\sd\hallo\venv\lib\site-packages\gradio\route_utils.py", line 714, in __call__
    await self.app(scope, receive, send)
  File "C:\sd\hallo\venv\lib\site-packages\starlette\middleware\exceptions.py", line 65, in __call__
    await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
  File "C:\sd\hallo\venv\lib\site-packages\starlette\_exception_handler.py", line 64, in wrapped_app
    raise exc
  File "C:\sd\hallo\venv\lib\site-packages\starlette\_exception_handler.py", line 53, in wrapped_app
    await app(scope, receive, sender)
  File "C:\sd\hallo\venv\lib\site-packages\starlette\routing.py", line 756, in __call__
    await self.middleware_stack(scope, receive, send)
  File "C:\sd\hallo\venv\lib\site-packages\starlette\routing.py", line 776, in app
    await route.handle(scope, receive, send)
  File "C:\sd\hallo\venv\lib\site-packages\starlette\routing.py", line 297, in handle
    await self.app(scope, receive, send)
  File "C:\sd\hallo\venv\lib\site-packages\starlette\routing.py", line 77, in app
    await wrap_app_handling_exceptions(app, request)(scope, receive, send)
  File "C:\sd\hallo\venv\lib\site-packages\starlette\_exception_handler.py", line 64, in wrapped_app
    raise exc
  File "C:\sd\hallo\venv\lib\site-packages\starlette\_exception_handler.py", line 53, in wrapped_app
    await app(scope, receive, sender)
  File "C:\sd\hallo\venv\lib\site-packages\starlette\routing.py", line 75, in app
    await response(scope, receive, send)
  File "C:\sd\hallo\venv\lib\site-packages\starlette\responses.py", line 352, in __call__
    await send(
  File "C:\sd\hallo\venv\lib\site-packages\starlette\_exception_handler.py", line 50, in sender
    await send(message)
  File "C:\sd\hallo\venv\lib\site-packages\starlette\_exception_handler.py", line 50, in sender
    await send(message)
  File "C:\sd\hallo\venv\lib\site-packages\starlette\middleware\errors.py", line 161, in _send
    await send(message)
  File "C:\sd\hallo\venv\lib\site-packages\uvicorn\protocols\http\httptools_impl.py", line 534, in send
    raise RuntimeError("Response content shorter than Content-Length")
RuntimeError: Response content shorter than Content-Length
WARNING:py.warnings:C:\sd\hallo\venv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py:69: UserWarning: Specified provider 'CUDAExecutionProvider' is not in available provider names.Available providers: 'AzureExecutionProvider, CPUExecutionProvider'
  warnings.warn(

Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: ./pretrained_models/face_analysis\models\1k3d68.onnx landmark_3d_68 ['None', 3, 192, 192] 0.0 1.0
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: ./pretrained_models/face_analysis\models\2d106det.onnx landmark_2d_106 ['None', 3, 192, 192] 0.0 1.0
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: ./pretrained_models/face_analysis\models\genderage.onnx genderage ['None', 3, 96, 96] 0.0 1.0
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: ./pretrained_models/face_analysis\models\glintr100.onnx recognition ['None', 3, 112, 112] 127.5 127.5
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: ./pretrained_models/face_analysis\models\scrfd_10g_bnkps.onnx detection [1, 3, '?', '?'] 127.5 128.0
set det-size: (640, 640)
Traceback (most recent call last):
  File "C:\sd\hallo\venv\lib\site-packages\gradio\queueing.py", line 532, in process_events
    response = await route_utils.call_process_api(
  File "C:\sd\hallo\venv\lib\site-packages\gradio\route_utils.py", line 276, in call_process_api
    output = await app.get_blocks().process_api(
  File "C:\sd\hallo\venv\lib\site-packages\gradio\blocks.py", line 1928, in process_api
    result = await self.call_function(
  File "C:\sd\hallo\venv\lib\site-packages\gradio\blocks.py", line 1514, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "C:\sd\hallo\venv\lib\site-packages\anyio\to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "C:\sd\hallo\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 2177, in run_sync_in_worker_thread
    return await future
  File "C:\sd\hallo\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 859, in run
    result = context.run(func, *args)
  File "C:\sd\hallo\venv\lib\site-packages\gradio\utils.py", line 832, in wrapper
    response = f(*args, **kwargs)
  File "C:\sd\hallo\scripts\app.py", line 31, in predict
    return inference_process(args)
  File "C:\sd\hallo\scripts\inference.py", line 162, in inference_process
    source_image_lip_mask = image_processor.preprocess(
  File "C:\sd\hallo\scripts\hallo\datasets\image_processor.py", line 124, in preprocess
    face = sorted(faces, key=lambda x: (x["bbox"][2] - x["bbox"][0]) * (x["bbox"][3] - x["bbox"][1]))[-1]
IndexError: list index out of range

@cocktailpeanut
Contributor Author

@nitinmukesh this is not a web UI problem but an inference problem (which I did not touch). In my experience this type of error depends on which image you use. I'm not sure which exact problem this is, but some of the issues I've encountered are:

  1. the image is webp or some other format not supported by the code (png/jpeg seem to work fine)
  2. something about the image dimensions (I don't know exactly what)
  3. no face can be detected in the image (happens with anime images, etc.)

The solution is to try a normal jpeg/png image with a clear human face. A defensive check that would surface this more clearly is sketched below.
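
For reference, the `IndexError` in the traceback comes from `image_processor.py` picking the largest detected face out of an empty list. A hypothetical guard (not part of this PR) would fail with a clearer message:

```python
# `faces` is the detection list from image_processor.py in the traceback above
if not faces:
    raise ValueError(
        "No face detected in the source image; "
        "use a png/jpeg with a clear human face."
    )
# pick the largest face by bounding-box area, equivalent to the
# original sorted(...)[-1] expression
face = max(
    faces,
    key=lambda f: (f["bbox"][2] - f["bbox"][0]) * (f["bbox"][3] - f["bbox"][1]),
)
```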

@cocktailpeanut
Contributor Author

@AricGamma I have run the lint and all the warnings for `app.py` are gone. However, I still see a bunch of messages for other files, but I can't really do anything about them since I didn't touch any of those files. Hope this update works; if it doesn't, let me know.

@cocktailpeanut
Contributor Author

> Please fix the static-check issues.
> Commands:
>
>   1. `isort $(git ls-files "*.py")`
>   2. `pylint $(git ls-files "*.py")` and fix lint issues manually.

I just saw that the workflow ran and failed, so I checked what was wrong and realized I had only run the second lint command. I misunderstood your comment as asking for just the second command (not the first one). I've now run the first lint command locally as well, confirmed the errors are gone, and re-pushed.

@AricGamma AricGamma merged commit 07ffd49 into fudan-generative-vision:main Jun 20, 2024
1 check passed
AricGamma pushed a commit to AricGamma/hallo that referenced this pull request Jun 22, 2024
* WebUI + Audio Fix

1. audio fix: explicitly specify the audio codec in `util.py`, otherwise the video is technically corrupt and doesn't play sound
2. web ui: gradio web ui
3. print the current step while running inference

gradio

* lint

* update