ComfyUI SDXL Refiner

The field of artificial intelligence has witnessed remarkable advancements in recent years, and one area that continues to impress is text-to-image generation. This article collects practical notes on using the SDXL base and refiner models together in ComfyUI.

 
ComfyUI got attention recently because its developer works for Stability AI and it was the first UI able to run SDXL. You will need a powerful Nvidia GPU or Google Colab to generate pictures with it; ready-made notebooks such as SDXL-OneClick-ComfyUI run on the free Colab tier and download SDXL 1.0 automatically.

Setup is simple: download the SDXL models (sd_xl_base_1.0.safetensors and sd_xl_refiner_1.0.safetensors) together with the SDXL VAE, place them in the right folders, and restart ComfyUI. With the SDXL 1.0 base and refiner checkpoints saved in the right place, it should work out of the box. The ComfyUI Manager also provides an "Install models" button and an "Install missing custom nodes" option for workflows that depend on extra nodes.

One of the most powerful features of ComfyUI is that within seconds you can load an appropriate workflow for the task at hand and click "Queue prompt" to render. A typical SDXL workflow contains two samplers (one for the base model, one for the refiner) and two Save Image nodes (one per stage): a batch of txt2img images is generated with the base and then passed through the refiner. With SDXL there is also the new concept of TEXT_G and TEXT_L inputs on the CLIP Text Encoder. If you are new to this, a common mistake is miswiring the model and CLIP outputs of the checkpoint loaders, so double-check those connections; the ComfyUI Basic Tutorial VN is a good place to start if you have no idea how any of this works.

Several extensions build on the basics. Searge-SDXL: EVOLVED (v4.x) is a custom nodes extension that includes a complete SDXL 1.0 workflow; a detailed description can be found on its GitHub project page. The Impact Pack exposes pipe functions for using the SDXL refiner inside its Detailer: FromDetailer (SDXL/pipe), BasicPipe -> DetailerPipe (SDXL), and Edit DetailerPipe (SDXL); its experimental/temporary nodes are all marked in blue. In workflows that expose a refiner switch, you enable it in the "Functions" section and set the "refiner_start" parameter to a value between 0.1 and 0.99 in the "Parameters" section; the node sits just above the "SDXL Refiner" section. ControlNet-style models such as thibaud_xl_openpose also work. You can even mix model families: there are workflows that pair the SDXL base with an SD 1.5 model for refining, or chains like Refiner > SDXL base > Refiner > RevAnimated. Doing that in Automatic1111 would mean switching models four times per picture, at roughly 30 seconds per switch, whereas ComfyUI keeps everything in one graph. (With the earlier SDXL 0.9 base+refiner, some systems would freeze and render times could stretch to 5 minutes per image.) Note that the refiner is not a light touch-up: even at a 0.2 denoise value it can change a face quite a bit.

The key idea behind the two-model setup is that the base model is good at generating original images from 100% noise, while the refiner is good at adding detail once only a small fraction of the noise (roughly 35% or less) remains. SDXL can also be driven programmatically rather than through the GUI.
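As a concrete illustration of that hand-off, here is a minimal sketch using Hugging Face diffusers (an assumption on my part, since the article focuses on ComfyUI, but the same two-stage logic applies; the 0.8 hand-off fraction and the prompt are illustrative):

```python
import torch
from diffusers import DiffusionPipeline

# Base model: starts from pure noise.
base = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16, variant="fp16", use_safetensors=True,
).to("cuda")

# Refiner: reuses the base's second text encoder and VAE to save VRAM.
refiner = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0",
    text_encoder_2=base.text_encoder_2, vae=base.vae,
    torch_dtype=torch.float16, variant="fp16", use_safetensors=True,
).to("cuda")

prompt = "a cinematic photo of a lighthouse at dusk"
steps, handoff = 30, 0.8  # base handles the first 80% of denoising

# Stop the base early and keep the result in latent space.
latents = base(prompt=prompt, num_inference_steps=steps,
               denoising_end=handoff, output_type="latent").images

# The refiner finishes the remaining low-noise steps and decodes to pixels.
image = refiner(prompt=prompt, num_inference_steps=steps,
                denoising_start=handoff, image=latents).images[0]
image.save("sdxl_base_plus_refiner.png")
```

The `denoising_end`/`denoising_start` pair is the diffusers equivalent of splitting one sampling schedule between two KSampler nodes in ComfyUI.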
A quick note on history: SDXL 0.9 first circulated via a leak ("Happy Reddit Leak day", as Joe Penna's Reddit post put it), which is why early guides mention leaked files; SDXL v1.0 was officially released on 26 July 2023, and ComfyUI is a good no-code GUI for testing it. According to the SDXL report, the base model performs significantly better than previous Stable Diffusion variants, and the base combined with the refinement module achieves the best overall performance; little has been published about how the refiner itself was trained. SDXL's out-of-the-box anime performance is weak, so for that use case training the base alone is not enough.

Several tutorial series start from scratch, with an empty ComfyUI canvas, and build up SDXL workflows step by step, producing 0.9/1.0 images consistent with the official approach (to the best of the authors' knowledge). More advanced node-flow logic for SDXL in ComfyUI covers four topics: style control; how to connect the base and refiner models; regional prompt control; and regional control of multi-pass sampling. As one (originally Chinese-language) tutorial puts it, once the logic is right you can wire the nodes however you like, and with the refiner wired in, "this is SDXL in its complete form."

For inpainting with SDXL 1.0 in ComfyUI, three methods are commonly used: the base model with a Latent Noise Mask, the base model with InPaint VAE Encode, and the inpaint-specific "diffusion_pytorch" UNET model from Hugging Face. For upscaling there is ComfyUI's Ultimate SD Upscale custom node; note that a 4x upscaling model produces a 2048x2048 result, and a 2x model should be faster with much the same effect.

Some practical configuration notes from testing: with SDXL, ancestral samplers often give the most accurate results; the BNK_CLIPTextEncodeSDXLAdvanced node offers finer control over the two text prompts; and keeping everything in one graph lets you adjust on the fly, even doing txt2img with SDXL and then img2img with SD 1.5. Remember that the denoise setting controls the amount of noise added to the image before re-sampling. (As an aside on the wider ecosystem: one new video-generation model is not AnimateDiff but a different structure entirely; Kosinkadink, who maintains the AnimateDiff ComfyUI nodes, got it working after tuning the right settings.)

Automatic1111's support for SDXL and the refiner model is quite rudimentary at present, and until recently required that the models be manually switched to perform the second step of image generation; ComfyUI nodes designed around the refiner make this much easier. The usual ratio of base to refiner steps is 8:2 or 9:1 (e.g., with 30 total steps the base stops at step 25 and the refiner runs from 25 to 30). This is the proper way to use the refiner, as the helper below illustrates.
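A small helper, as a sketch, that turns a total step count and a base share into the values you would type into two samplers; the function name and defaults are mine, and the field names mirror the inputs ComfyUI's KSamplerAdvanced node exposes:

```python
def split_steps(total_steps: int, base_share: float) -> tuple[dict, dict]:
    """Split one sampling schedule between the base and refiner samplers."""
    handoff = round(total_steps * base_share)
    base = {"start_at_step": 0, "end_at_step": handoff,
            "add_noise": "enable",                   # base starts from fresh noise
            "return_with_leftover_noise": "enable"}  # hand the noisy latent onward
    refiner = {"start_at_step": handoff, "end_at_step": total_steps,
               "add_noise": "disable",               # continue, don't re-noise
               "return_with_leftover_noise": "disable"}
    return base, refiner

# The 30-step example from the text: base runs steps 0-25, refiner 25-30.
print(split_steps(30, 25 / 30))
# A strict 8:2 split of the same schedule would hand over at step 24 instead.
print(split_steps(30, 0.8))
```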
For beginners there is a dedicated repo-style tutorial for the originally released stable-diffusion-xl-0.9, and Searge's custom nodes ship an easy-to-use SDXL 1.0 workflow. A little about the step math used there: the total step count should be divisible by 5, for example 10 steps on the base SDXL model and steps 10-20 on the SDXL refiner. Dragging one of the bundled workflow images into the ComfyUI browser loads the whole graph, and the default SDXL workflow comes with a bunch of notes explaining things. The refiner model does what the name suggests: it refines your images for better quality. Compare the leaf at the bottom of a flower picture rendered with and without the refiner to see the difference. Technically, SDXL 1.0 is a mixture-of-experts pipeline that includes both a base model and a refinement model (the 0.9 announcement quoted roughly 6.6B parameters for the full ensemble pipeline).

A few operational details. The models go into the folder ComfyUI/models/checkpoints. In the graph, the final sampler output goes to a VAE Decode node and then to a Save Image node. Generation speed is about 10 s/it at 1024x1024 with batch size 1, and the refiner stage runs faster, at around 1 s/it or better, when refining at the same 1024x1024 resolution; one workflow author notes their setup targets cards with as little as 4GB of VRAM, and the base + refiner combo also runs on macOS (reported on macOS 13). If you want a managed install, download the Pinokio browser, then click "Discover" inside it to browse to the ComfyUI script. Stability has also released Control-LoRA, an official set of ControlNet-style models, along with a few other interesting ones. Face or hand regeneration with a detailer works best when the hands in the original image are already in reasonable shape, and misconfigured nodes can lead to erroneous conclusions, so double-check settings before comparing results.

When running in Colab, you can copy the generated images to Google Drive; the snippet that circulates for this is broken as published and is reconstructed below.
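A runnable reconstruction of that snippet (the `output_folder_name` value was never defined in the original, so the name here is an assumption, and the copy step is filled in; it assumes Google Drive is already mounted at /content/drive):

```python
import shutil

source_folder_path = '/content/ComfyUI/output'  # output folder in the runtime environment
output_folder_name = 'comfyui_output'           # assumed name; replace with your own
destination_folder_path = f'/content/drive/MyDrive/{output_folder_name}'  # destination in your Google Drive

# Create the destination folder in Google Drive if it doesn't exist,
# then copy the generated images over (Drive must already be mounted).
shutil.copytree(source_folder_path, destination_folder_path, dirs_exist_ok=True)
```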
There are two ways to use the refiner: run the base and refiner together in a single pass to produce one refined image, or finish the base generation and then, in a second step, send the result through the refiner as an img2img pass. In the combined setup, the refiner removes the residual noise and the "patterned effect" the base can leave behind. Nodes that let you specify the start and stop step make it possible to use the refiner as intended, and this is already working in ComfyUI today. A couple of rules of thumb: the refiner should have at most half the steps that the generation has, and separate LoRAs would need to be trained for the base and refiner models, since the two are different networks. Don't expect miracles either: if SDXL wants an 11-fingered hand, the refiner gives up.

For model comparison tests, use the same configuration with the same prompts, and use the refiner for every test even though some fine-tuned SDXL models don't require one; otherwise the comparison is skewed. At a resolution like 1080x720 with suitable samplers and schedulers you can get a well-balanced, good-quality first image from the base model alone, with the refiner adding the final polish; use caution with the interactions between these settings. In upscale-enabled workflows, a commonly shared starting point is to tick "Enable" and set the denoise to about 0.75 before the refiner KSampler.

The ecosystem around this is growing quickly. Fooocus-MRE (MoonRide Edition) is a variant of lllyasviel's original Fooocus with a new UI for SDXL models. ComfyBox puts a friendlier front end on ComfyUI graphs, and StableSwarmUI (developed by Stability AI, in early alpha) uses ComfyUI as its backend. One custom node pack renamed CR Aspect Ratio SDXL to CR SDXL Aspect Ratio and CR SDXL Prompt Mixer to CR SDXL Prompt Mix Presets, and added a Multi-ControlNet methodology; AnimateDiff-SDXL needs the linear (AnimateDiff-SDXL) beta_schedule; ControlNet depth works too (e.g., at a ControlNet weight of 1). In Automatic1111, by contrast, you refine by selecting sd_xl_refiner_1.0 in the Stable Diffusion checkpoint dropdown and re-processing the image.

For those not familiar with ComfyUI, a shared example workflow reads roughly as: generate a text2image "Picture of a futuristic Shiba Inu", with negative prompt "text, watermark", using SDXL base 0.9, then refine. Workflows like this are saved as .json files, which also means they can be queued programmatically.
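For instance, ComfyUI runs a local HTTP server (port 8188 by default) that accepts workflows in its API format; a minimal sketch, assuming you have exported a workflow with "Save (API Format)" (the filename here is made up):

```python
import json
import urllib.request

# A workflow exported from ComfyUI via "Save (API Format)"
# (enable dev mode in the settings to see that menu entry).
with open("sdxl_base_refiner_api.json") as f:
    workflow = json.load(f)

# Queue it on a locally running ComfyUI instance.
req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=json.dumps({"prompt": workflow}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp))  # the response includes the queued prompt_id
```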
On VAEs: the recommended VAE is a fixed version that works in fp16 mode without producing just black images. There is a VAE baked into both the base and refiner models, so a separate VAE file is optional, but keeping it as its own node in the workflow means it can be updated or swapped without needing a new checkpoint, and a plain CheckpointLoaderSimple node is all you need to load the SDXL refiner itself. To recap the design: the refiner is a new feature of SDXL, and the proper intended way to use it is the two-step text-to-image process described above, not as a general-purpose img2img model. It is also the slow stage: users report the base running at a few seconds per iteration while the refiner climbs to as much as 30 s/it.

For Colab users, the sdxl_v0.9_comfyui_colab notebook (the 1024x1024 model) should be used together with refiner_v0.9, and LoRA support is included. If you want to learn the tool properly, Scott Detweiler's ComfyUI videos are worth following. Play around with different samplers and different amounts of base steps (30, 60, 90, maybe even higher). Always use the latest version of a workflow's json file with the latest version of its custom nodes, and if you come from SD 1.5 there is even a converter workflow (sd_1-5_to_sdxl_1-0.json) for porting an SD 1.5 graph to SDXL. Double-clicking an empty space opens the node search; type "sdxl" and the CLIP nodes for the base and refiner appear, to be used with their respective models. Ready-made graphs such as the SDXL 1.0 Base + LoRA + Refiner workflow cover the common case, and since many novice users don't like the node frontend, converted ComfyBox versions exist too. The same models can of course be loaded programmatically with diffusers' from_pretrained, as sketched earlier.

Useful companions: ComfyUI ControlNet aux, a plugin with the preprocessors ControlNet needs, so you can generate guided images directly from ComfyUI; the Impact Pack, a custom node pack for conveniently enhancing images through Detector, Detailer, Upscaler, Pipe, and more; and the SDXL Prompt Styler (including a new Advanced node) for more elaborate prompt workflows. On VRAM: ComfyUI has an initial learning curve, but once mastered it gives you more control and saves VRAM; in Automatic1111, if you are tight on VRAM and swapping in the refiner, start with the --medvram-sdxl flag. Back in the 0.9 testing phase, an RTX 2060 6GB laptop took about 6-8 minutes for a 1080x1080 image with 20 base steps and 15 refiner steps, so temper expectations on small cards. You also do not need the remaining files from the old leak (the pytorch, vae, and unet folders): for ComfyUI the two main safetensors checkpoints plus the VAE are enough.

Every image ComfyUI saves carries its metadata. If ComfyUI or A1111's sd-webui can't read the image metadata, open the last image in a text editor to read the details.
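A quick sketch of reading those chunks with Pillow (the filename follows ComfyUI's default naming pattern):

```python
from PIL import Image

img = Image.open("ComfyUI_00001_.png")

# ComfyUI stores the full graph in PNG text chunks named "prompt" and
# "workflow"; A1111 stores its settings under "parameters" instead.
for key, value in img.text.items():
    print(f"{key}: {value[:200]}...")  # first 200 chars of each chunk
```

Dragging such a PNG onto the ComfyUI window rebuilds the entire workflow from the embedded JSON.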
Doing txt2img first and then a separate img2img pass to refine never feels quite right, and (as a Chinese-language tutorial puts it) ComfyUI is the tool that integrates the two models into a single run: using multiple nodes, it runs the first part of sampling on the base and the second part on the refiner, cleanly producing a high-quality image in one pass. If some of a workflow's nodes fail to load, they show as red on the graph; install the missing packs via the Manager. Good shared workflows, such as Sytan's SDXL ComfyUI workflow, are meticulously fine-tuned to accommodate LoRA and ControlNet inputs and demonstrate interactions with embeddings as well; others combine SDXL (base + refiner) with ControlNet XL OpenPose and a FaceDefiner pass, and there are video tutorials covering the basics through full SDXL-and-refiner graphs. (A Japanese note adds that anime-focused SDXL checkpoints such as BreakDomainXL v05g and blue pencil-XL can be swapped in.)

How should the steps be divided? One user's extensive testing found that at a 13/7 split the base does the heavy lifting on the low-frequency information while the refiner handles the high-frequency information, and neither interferes with the other's specialty. In their 1024px comparison, a single image at 25 base steps with no refiner lost to 20 base steps + 5 refiner steps: everything was better except the lapels. This matches the design, since the refiner is trained specifically to do the last 20% of the timesteps, so there is no point wasting base steps on territory the refiner owns. The refiner files live on Hugging Face under stabilityai/stable-diffusion-xl-refiner-1.0; the recommended VAE goes in the folder ComfyUI/models/vae, and upscale models (4x-UltraSharp is the recommended one) go in ComfyUI/models/upscale_models. People also ask whether 8GB of VRAM is too little for SDXL in A1111; it is workable with the memory flags mentioned above, and ComfyUI is more forgiving still. For Hires.fix-style runs with the 0.9 base + refiner combo, duplicate the Load Image and Upscale Image nodes from the img2img workflow.

You can use the base model by itself, but for additional detail you should move the output to the refiner, either in-graph as above or as a light img2img pass over a finished render.
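A sketch of that second option with diffusers (again an assumption on my part, since the text describes the ComfyUI route; the strength value echoes the earlier warning that even 0.2 noticeably changes faces):

```python
import torch
from diffusers import StableDiffusionXLImg2ImgPipeline
from diffusers.utils import load_image

refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0",
    torch_dtype=torch.float16, variant="fp16", use_safetensors=True,
).to("cuda")

init_image = load_image("base_render.png").convert("RGB")

# Low strength means only a few refiner steps over an almost-finished
# image. Even 0.2 can visibly alter a face, so start small.
image = refiner(prompt="a detailed portrait photo",
                image=init_image, strength=0.25).images[0]
image.save("refined_img2img.png")
```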
In ComfyUI the single-run split can be accomplished with the output of one KSampler node (using the SDXL base) leading directly into the input of another KSampler node (using the SDXL refiner). To simplify the workflow, set up the base generation and the refiner refinement using two Checkpoint Loaders, one per model; the SDXL base checkpoint otherwise behaves like any regular checkpoint in ComfyUI. If a graph misbehaves, make sure everything is updated first, as custom nodes can fall out of sync with the base ComfyUI version.

A few scattered notes. If you only have a LoRA for the base model, you may actually want to skip the refiner or at least use it for fewer steps. Installing ControlNet for Stable Diffusion XL works on Windows or Mac, and going through the Manager is easier than a manual install. Under the hood, SDXL is a Latent Diffusion Model that uses a pretrained text encoder (OpenCLIP-ViT/G), and one recurring refiner complaint traces back to Stability's OpenCLIP model rather than to ComfyUI. It would be neat to extend the SDXL DreamBooth LoRA script with an example of how to train the refiner, since today most character LoRAs target the base only. Newer model releases include additional metadata that makes it easy to tell which version a file is, whether it is a LoRA, which keywords to use with it, and whether the LoRA is compatible with SDXL 1.0. As a Chinese-language comparison puts it: the refiner model's output beats the base model's in quality and detail capture; no comparison, no harm. SDXL 1.0 support, with refiner and even MultiGPU support, has landed across tools; to use the refiner in the AP Workflow you must enable it in the "Functions" section and set the "End at Step / Start at Step" switch to 2 in the "Parameters" section, and the split of the diffusion steps between the base and the refiner can be automated. In short, SDXL includes a refiner model specialized in denoising low-noise-stage images to generate higher-quality images from the base model. Native SDXL resolutions such as 896x1152 or 1536x640 are good starting points, and AUTOMATIC1111 has finally fixed the high-VRAM issue in a pre-release version. The hand-off between the two samplers looks like this in a workflow's API-format JSON.
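A trimmed, illustrative excerpt written as a Python dict (node ids, seeds, and the upstream node numbering are assumptions; the input field names are the ones KSamplerAdvanced exposes):

```python
# Two KSamplerAdvanced nodes sharing one 30-step schedule, API format.
# References are [node_id, output_index]; nodes "4"/"5" stand for the base
# and refiner CheckpointLoaderSimple nodes, "6"-"9" for the CLIP encodes,
# and "10" for the empty latent.
prompt_excerpt = {
    "20": {"class_type": "KSamplerAdvanced", "inputs": {
        "model": ["4", 0], "positive": ["6", 0], "negative": ["7", 0],
        "latent_image": ["10", 0],
        "add_noise": "enable", "noise_seed": 42, "steps": 30, "cfg": 7.0,
        "sampler_name": "euler_ancestral", "scheduler": "normal",
        "start_at_step": 0, "end_at_step": 25,
        "return_with_leftover_noise": "enable",  # keep noise for the refiner
    }},
    "21": {"class_type": "KSamplerAdvanced", "inputs": {
        "model": ["5", 0], "positive": ["8", 0], "negative": ["9", 0],
        "latent_image": ["20", 0],  # latent handed over from the base sampler
        "add_noise": "disable", "noise_seed": 42, "steps": 30, "cfg": 7.0,
        "sampler_name": "euler_ancestral", "scheduler": "normal",
        "start_at_step": 25, "end_at_step": 30,
        "return_with_leftover_noise": "disable",
    }},
}
```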
After gathering some more knowledge about SDXL and ComfyUI, and experimenting for a few days with both, many people end up with exactly this basic (no upscaling) two-stage (base + refiner) workflow. It works well: you change dimensions, prompts, and sampler parameters, but the flow itself stays as it is. The old 0.9 workflows (the one from Olivio Sarikas's video works just fine) keep working too; just replace the models with the 1.0 versions. For something richer, AP Workflow 3.0 for ComfyUI added support for SD 1.5 alongside SDXL, with many extra nodes specifically to compare the outputs of different workflows, and AP Workflow 6.0 is already being tested. A related series, in Part 2, added an SDXL-specific conditioning implementation and tested the impact of the conditioning parameters on the generated images.

Some closing cautions. If the refiner doesn't know the LoRA concept, any changes it makes might just degrade the results, which is why people ask how to load LoRAs for the refiner model in the first place. Running the SDXL base for a 10-step ddim KSampler pass, converting to an image, and finishing on a 1.5 model is not the ideal way to run it. Check that you have enough system RAM, not just VRAM, before blaming the models. For upscalers, 4x-UltraSharp is popular, but NMKD works best for this kind of workflow. Remember that the .png files people post from their runs usually embed the full workflow: save the image and drop it into ComfyUI to reproduce the setup. Keep test workflows fairly simple so as not to overcomplicate things, and try different samplers; DPM++ 2S a Karras, DPM++ SDE Karras, DPM++ 2M Karras, Euler a, and DPM adaptive all behave differently with SDXL.

A last trick worth knowing: Hires.fix is just creating an image at a lower resolution, upscaling it, and then sending it through img2img.
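That recipe is easy to mimic outside the GUI too; a minimal sketch with diffusers (the resolutions, the Lanczos stand-in for a real upscale model, and the 0.3 denoise are all illustrative assumptions):

```python
import torch
from PIL import Image
from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline

base = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16, variant="fp16", use_safetensors=True,
).to("cuda")

prompt = "a cozy cabin in a snowy forest, golden hour"

# Step 1: render small.
low = base(prompt=prompt, width=832, height=832).images[0]

# Step 2: upscale (plain Lanczos here; an upscale model such as
# 4x-UltraSharp or NMKD would do a better job).
big = low.resize((1664, 1664), Image.LANCZOS)

# Step 3: light img2img pass at the new size to re-add detail,
# reusing the already-loaded components to save VRAM.
img2img = StableDiffusionXLImg2ImgPipeline(**base.components).to("cuda")
final = img2img(prompt=prompt, image=big, strength=0.3).images[0]
final.save("hires_fix.png")
```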