[Bug]: IMPORTANT: You are using gradio version 3.41.2, however version 4.29.0 is available, please upgrade #2911

@ArteSelecta

Description

Checklist

  • The issue has not been resolved by following the troubleshooting guide
  • The issue exists on a clean installation of Fooocus
  • The issue exists in the current version of Fooocus
  • The issue has not been reported before recently
  • The issue has been reported before but has not been fixed yet

What happened?

When I start Fooocus I get the message quoted in the title about the new Gradio version. I upgraded Gradio, but nothing changed!

Steps to reproduce the problem

source fooocus_env/bin/activate
python3 entry_with_update.py --always-download-new-model
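If an upgrade seems to have no effect, the environment that pip upgraded may not be the one the launcher actually uses. A small diagnostic sketch (not part of Fooocus) to confirm which gradio version the active interpreter really sees — run it inside the activated fooocus_env:

```python
# Diagnostic sketch: report the gradio version visible to THIS interpreter.
# Run inside the activated fooocus_env to compare against the banner Fooocus prints.
import importlib.metadata as md

def installed_version(pkg: str) -> str:
    """Return the installed version of pkg, or 'not installed' if absent."""
    try:
        return md.version(pkg)
    except md.PackageNotFoundError:
        return "not installed"

print("gradio:", installed_version("gradio"))
```

If this prints 3.41.2 while `pip show gradio` elsewhere reports 4.29.0, the upgrade went into a different environment.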

What should have happened?

Gradio should be updated to the latest version, but Fooocus keeps telling me I'm still using Gradio 3.41.2.

What browsers do you use to access Fooocus?

Google Chrome

Where are you running Fooocus?

None

What operating system are you using?

Pop!_OS

Console logs

Already up-to-date
Update succeeded.
[System ARGV] ['entry_with_update.py', '--always-download-new-model']
Python 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0]
Fooocus version: 2.3.1
[Cleanup] Attempting to delete content of temp dir /tmp/fooocus
[Cleanup] Cleanup successful
Total VRAM 5931 MB, total RAM 64033 MB
Set vram state to: NORMAL_VRAM
Always offload VRAM
Device: cuda:0 NVIDIA GeForce RTX 3060 Laptop GPU : native
VAE dtype: torch.bfloat16
Using pytorch cross attention
Refiner unloaded.
Running on local URL:  http://127.0.0.1:7865

To create a public link, set `share=True` in `launch()`.
IMPORTANT: You are using gradio version 3.41.2, however version 4.29.0 is available, please upgrade.
--------
model_type EPS
UNet ADM Dimension 2816
Using pytorch attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using pytorch attention in VAE
extra {'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids', 'cond_stage_model.clip_l.text_projection'}
Base model loaded: /home/lucky/Fooocus/models/checkpoints/juggernautXL_v8Rundiffusion.safetensors
Request to load LoRAs [['sd_xl_offset_example-lora_1.0.safetensors', 0.1], ['None', 1.0], ['None', 1.0], ['None', 1.0], ['None', 1.0]] for model [/home/lucky/Fooocus/models/checkpoints/juggernautXL_v8Rundiffusion.safetensors].
Loaded LoRA [/home/lucky/Fooocus/models/loras/sd_xl_offset_example-lora_1.0.safetensors] for UNet [/home/lucky/Fooocus/models/checkpoints/juggernautXL_v8Rundiffusion.safetensors] with 788 keys at weight 0.1.
Fooocus V2 Expansion: Vocab with 642 words.
Fooocus Expansion engine loaded for cuda:0, use_fp16 = True.
Requested to load SDXLClipModel
Requested to load GPT2LMHeadModel
Loading 2 new models
[Fooocus Model Management] Moving model(s) has taken 0.40 seconds
Started worker with PID 12540
App started successful. Use the app with http://127.0.0.1:7865/ or 127.0.0.1:7865

Additional information

Fooocus apparently continues to work well despite the warning.
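The version banner is printed by gradio itself, and Fooocus pins its dependency versions, so a manual pip upgrade inside the venv may simply be reverted by the launcher's dependency check. A hedged way to inspect the pin in the checkout (assuming the file is named `requirements_versions.txt`, as in the Fooocus repository; adjust the path to your install):

```shell
# Inspect the gradio pin carried by the Fooocus checkout.
# (requirements_versions.txt is an assumption based on the Fooocus repo layout.)
REQ="$HOME/Fooocus/requirements_versions.txt"
if [ -f "$REQ" ]; then
    grep -i '^gradio' "$REQ"
else
    echo "requirements file not found at $REQ"
fi
```

If the file pins `gradio==3.41.2`, the launcher is working as intended and the banner can be ignored until Fooocus itself moves to Gradio 4.x.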


Labels

bug (Something isn't working), wontfix / cantfix (This will not be worked on)
