You call a LoRA with a tag like <lora:name:0.9>, where the number is the strength; it gives very good results at strengths a bit below 1.0. Don't know how Comfy behaves if it's not the case, but you have to have a LoRA that's compatible with your checkpoint.

(Image: the LoRA stack connected to other nodes.) It's used the same as other LoRA loaders (chaining a bunch of nodes), but unlike the others it has an on/off switch. Beneath the main part there are three modules: LoRA, IP-Adapter and ControlNet.

The only reason I'm needing to get into actual LoRA training at this pretty nascent stage of its usability is that Kohya's DreamBooth LoRA extractor has been broken since Diffusers moved things around a month back, and the dev team are more interested in working on SDXL than fixing Kohya's ability to extract LoRAs from V1.5 DreamBooths.

In Automatic1111, for example, you load a LoRA and control its strength by simply typing something like this: <lora:Dragon_Ball_Backgrounds_XL:0.75>. Just inpaint her face with the LoRA + a standard prompt. I follow his stuff a lot trying to learn.

I've made a few LoRAs now of a person (ballpark about 70 photos each). Useful strength ranges vary from LoRA to LoRA: some behave between 0 and 1, others still work up around 2.x or something. I have so far just tested with base SDXL 1.0 and can't comment on how well it will work with various fine-tunes.

I have tried sending the float output values from scheduler nodes into the input values for motion_scale or lora_strength, but I get errors when I run the workflow. This is unnecessary, but hardcoding a 1.0 for all of the loaders you have chained in works too.

Also, I've had bad luck with the LCM LoRA from the Additional Networks plug-in. The image comes out looking dappled and fuzzy, not nearly as good as DDIM, for example. The only way I've found to not use a LoRA, other than disconnecting the nodes each time, is to set the model strength to 0.

Previously I used to train LoRAs with Kohya_ss, but I think it would be very useful to train and test LoRAs directly in ComfyUI.

(Image captions: "Right: increased smooth step strength"; "No LoRA applied, scaled down 50%.")

Save some of the information (for example, the name of each LoRA with its associated activation word) into a text file, which I can search easily. This may need to be adjusted on a drawing-to-drawing basis; at 0.5 you can easily prompt a background and other things. My best LoRA tensor file hit a loss rate of around 0.01 at around 10k iterations.

People have been extremely spoiled and think the internet is here to give away free shit for them to barf on, instead of seeing it as a collaboration between human minds from different economic and cultural spheres binding together to create a global culture that elevates people.

Start with a full 1.0 LoRA strength and adjust down if you need. LoRA weights I typically divide in half and tweak from that starting point. But I've seen it enhance features with some LoRAs. The extension also provides XY plot components to better evaluate merge settings. ComfyUI only allows stacking LoRA nodes, as far as I know.

Reddit user _roblaughter_ discovered a severe security issue in the ComfyUI_LLMVISION node created by user u/AppleBotzz. If you have installed and used this node, your sensitive data, including browser passwords, credit card information, and browsing history, may have been compromised and sent to a Discord server via webhook.

I can already use wildcards in ComfyUI via Lilly Nodes, but there's no node I know of that makes it possible to call one or more LoRAs from a text prompt.
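Since several comments above lean on the A1111 <lora:name:weight> tag syntax, here is a minimal sketch of how such tags could be parsed out of a prompt string, in the spirit of what wildcard/syntax-processor nodes do. The regex and helper name are my own illustration, not any existing node's API; the optional second number is treated as the CLIP strength, which is how A1111 interprets it.

```python
import re

# Matches A1111-style tags: <lora:name:0.8> or <lora:name:0.8:0.5>.
# The second number, when present, is the CLIP strength.
LORA_TAG = re.compile(r"<lora:([^:>]+)(?::([0-9.+-]+))?(?::([0-9.+-]+))?>")

def extract_lora_tags(prompt: str):
    """Return (prompt_without_tags, [(name, strength_model, strength_clip)])."""
    loras = []
    for name, s_model, s_clip in LORA_TAG.findall(prompt):
        model = float(s_model) if s_model else 1.0  # A1111 defaults to 1.0
        clip = float(s_clip) if s_clip else model   # CLIP follows model unless given
        loras.append((name, model, clip))
    return LORA_TAG.sub("", prompt).strip(), loras

text, loras = extract_lora_tags("a castle <lora:Dragon_Ball_Backgrounds_XL:0.75>")
print(text)   # a castle
print(loras)  # [('Dragon_Ball_Backgrounds_XL', 0.75, 0.75)]
```

The cleaned prompt goes to the text encoder; the extracted names and strengths can drive LoRA loader nodes.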
Take the outputs of that Load LoRA node and connect them to the inputs of the next LoRA node if you are using more than one LoRA model, using the same ratios/weights, etc. Even though it's a slight annoyance having to wire them up, especially more than one, that does come with some UI validation and cleaner prompts. Not to mention ComfyUI just straight up crashes when there are too many options included.

Sorry if this has been asked before, but I can't seem to find answers. This prompt was 'woman, blonde hair, leather jacket, blue jeans, white t-shirt'. I use it in the 2nd step of my workflow, where I create the realistic image with the ControlNet inputs. However, the prompt makes a lot of difference. I'm new to ComfyUI and to using Stable Diffusion in general. At least for me it was like that, but I can't say for you, since we don't have the workflow you use.

It's as if anything this LoRA is included in gets corrupted, regardless of strength? Tested a bunch of others by that author, now also in ComfyUI, and they all produce the same image, no matter the strength, too.

Then split out the images into separate PNGs and use them to create a LoRA in Kohya_SS (optionally, upscale each image first with a low denoise strength for extra detail). Once the LoRA was trained on the first 10 images, I went back into Stable Diffusion and created 24 new images using the LoRA, at various angles and higher resolutions.

Hello, I am new to Stable Diffusion and I tried fine-tuning using LoRA. I load the models fine and connect the proper nodes, and they work, but I'm not sure how to use them properly to mimic other WebUIs' behavior. Using only the trigger word in the prompt, you cannot control the LoRA. So to replicate the same workflow in ComfyUI, insert a LoRA, set the strength via the loader's slider, and do not insert anything special in the prompt. It works for all Checkpoints, LoRAs, Textual Inversions, Hypernetworks, and VAEs.

I usually txt2img at CFG 5-7, and inpaint around 3-5. Go to the Lora tab and use the LoRA named "ip-adapter-faceid-plus_sd15_lora" in the positive prompt. It would clutter the workflow less. (Grid caption: the leftmost column is only the LoRA; down: increased LoRA strength.)

The KSampler takes only one model. Checkpoints --> LoRA. Never set Shuffle or NormalBAE too high, or it behaves like inpainting. Eventually add some more parameters for the clip strength, like <lora:full_lora_name:X.X:X.X>. The intended way to use SDXL is that you use the Base model to make a "draft" image and then you use the Refiner to make it better.

To randomize: right-click on your LoRA loader node, then "convert widget to input" > lora_name. Add a Primitive node and plug it into the lora_name input, then for "control after generate" choose randomize.
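For reference, here is roughly what chaining Load LoRA nodes amounts to in ComfyUI's Python internals. This is a hedged sketch: load_lora_for_models lives in comfy/sd.py in current ComfyUI, but its exact signature can change between versions, and the wrapper function here is my own.

```python
# Sketch of what chained Load LoRA nodes do internally, assuming ComfyUI's
# comfy.sd.load_lora_for_models (present in comfy/sd.py; signature may vary
# between versions). The wrapper below is my own, not a ComfyUI API.
import comfy.sd
import comfy.utils

def apply_lora_chain(model, clip, lora_specs):
    """lora_specs: iterable of (path, strength_model, strength_clip)."""
    for path, s_model, s_clip in lora_specs:
        lora = comfy.utils.load_torch_file(path, safe_load=True)
        # Each call patches the previous result, exactly like wiring one
        # Load LoRA node's MODEL/CLIP outputs into the next node's inputs.
        model, clip = comfy.sd.load_lora_for_models(model, clip, lora,
                                                    s_model, s_clip)
    return model, clip  # route into CLIPTextEncode / KSampler as usual
```

The order of the chain doesn't express precedence; each LoRA's weight deltas are simply added on top of the already-patched model.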
Specifically, changing the motion_scale or lora_strength values during the video to make the video move in time with the music. Also, the IPAdapter strength sweet spot seems to be between 0.4-0.…

Download this extension: stable-diffusion-webui-composable-lora. A quick step-by-step for installing extensions: click the Extensions tab within the Automatic1111 web app > click the Available sub-tab > Load from > search "composable lora" > Install > then restart the web app and reload the UI. Until then, I've lit a candle to the gods of Copy & Paste and created the LoRA-vs-LoRA plot in a workflow.

In A1111, each LoRA you are using should have an entry for it in the prompt box. Use something like <lora:name:0.5> and play around with the weight numbers until it looks how you want. For example, <lora:name:0.8> would set that LoRA to 0.8 strength.

Option a) txt2img + low denoising strength + ControlNet tile resample; option b) img2img inpaint + ControlNet tile resample (if you want to maintain all text).

CLIP strength: most LoRAs don't contain any text token training (classification labels for image concepts in the LoRA data set). On A1111 the positive "clip skip" value is indicated, going to stop the CLIP before its last layer. Comfy does the same, just denoting it negative (I think it's referring to the Python idea that uses negative values in array indices to denote the last elements); let's say ComfyUI is more programmer-friendly, then 1 (A1111) = -1 (ComfyUI) and so on (I mean the clip skip values). The reason you can tune both in ComfyUI is because the CLIP and MODEL/UNET parts of the LoRA will most likely have learned different concepts, so tweaking them separately can be useful.

As with lots of things in ComfyUI, there are multiple ways to do this. And some LoRAs do not play well with some checkpoint models. I'm pretty sure the LoRA file has to go under models/lora to work in a prompt, instead of the Additional Networks LoRA folder. Tried a few combinations but, you know, RAM is scarce while testing.

From then on you can get very impressive results by playing with the strength. LoRA has no concept of precedence (where it appears in the prompt order makes no difference), so the standard ComfyUI workflow of not injecting them into prompts at all actually makes sense.

I've developed a ComfyUI extension that offers a wide range of LoRA merge techniques (including DARE). Rescale the LoRA strength: finally, test the LoRA again and consider that it might need a higher strength now.

The LoRA has improved with the step increases. You often need to reduce the CFG to let the system make it "nice"… at the cost of potentially losing the LoRA "model" side of things. When you mix LoRAs this can get compounded… though it depends on the type of LoRAs.
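For the music-synced idea in the first comment, the per-frame strengths can be precomputed as plain floats and fed to whatever scheduling input your nodes expose. A minimal sketch, with made-up helper names rather than any specific node's API:

```python
def schedule_strength(keyframes, num_frames):
    """keyframes: [(frame, strength), ...] -> one float per frame.
    Made-up helper, not a specific node's API."""
    keyframes = sorted(keyframes)
    values = []
    for f in range(num_frames):
        if f <= keyframes[0][0]:          # before the first keyframe
            values.append(keyframes[0][1])
        elif f >= keyframes[-1][0]:       # after the last keyframe
            values.append(keyframes[-1][1])
        else:                             # linear interpolation between two keys
            for (f0, v0), (f1, v1) in zip(keyframes, keyframes[1:]):
                if f0 <= f <= f1:
                    t = (f - f0) / (f1 - f0)
                    values.append(v0 + t * (v1 - v0))
                    break
    return values

# strength pulses from 0.4 up to 1.0 on a beat at frame 24, back down by 48
print(schedule_strength([(0, 0.4), (24, 1.0), (48, 0.4)], 49))
```

Keyframes at beat positions (from a beat detector or by hand) then give you the "moves with the music" effect without hand-editing each frame.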
Is this workflow at all possible in ComfyUI? I want to automate the adjustment of the LoRA weight: I would like to generate multiple images for every 0.2 change in weight, from 0 to 1.0, so I can compare them and choose the best one (I don't need the plot, just individual images so I can compare myself). Appreciated, thanks! I use Efficiency Nodes for ComfyUI Version 2.0+ for stacked LoRAs, so making a change in the weight of a LoRA can make a huge difference in the image, but with stacked LoRAs it becomes a time-consuming and tiring process, so I want to automate it.

So if you have different LoRAs applied to the base model, each pipeline will have a different model configuration. And certainly, though I need to put more thought into that for SDXL and how I'll differentiate it from what's already out there and from ClassipeintXL. I tried IPAdapter, but if I set the strength too high, it tries to be too close to the original image.

When you use a LoRA Stacker, the LoRA weight and the clip weight of the LoRA are the same; when you load a LoRA in the LoRA Loader, you can use two different values: <lora:LORANAMEHERE:0.x>. So, using the same type of prompts as he is doing for pw_a, pw_b, etc., set your LoRA loader to allow a strength input, and just direct that type of scheduling prompt to the strength of the LoRA; it works with the adjusted code in the node.

Idk if it is done like this, but what I would do is generate a few, let's say 6, images with the same prompt and LoRA intensity for each methodology to test, and ask a random five people to give scores to each group of six. There you have it! I hope this helps.

You can also decrease the length by reducing the batch size (number of frames) regardless of what the prompt schedule says (useful for doing quick tests). BTW, SDXL LoRAs do not work in non-SDXL and the opposite also happens: for a 1.5 model you need a 1.5 LoRA, and for an SDXL model you need an SDXL LoRA.

If I have a LoRA at 0.3 weight and I have a trigger word at 0.3 weight, it doesn't mean that the trigger word is only applied to the LoRA. The LoRA modifies the base model. In your case, I think it would be better to use ControlNet and a face LoRA. But I can't seem to figure out how to pass all that to a KSampler for the model.

What am I doing wrong? I played around a lot with LoRA strength, but the result always seems to have a lot of artifacts. I've settled on simple prompts that include some of the face and body features I'm aiming for. Simply adding detail to existing crude structures is the easiest, and I mostly only use LoRA. For the LoRA I prefer to use one that focuses on lineart and sketches, set to near full strength.

The "Model" output of the last Load LoRA node goes to the "Model" input of the sampler node. It will pick a random LoRA each time you queue a prompt to generate. When you have a LoRA that accepts float strength values between -1 and 1, how can you randomize this for every generation? There is the randomized primitive INT, and there are math nodes that convert integers to floats. I'm still experimenting and figuring out a good workflow. Whereas a single wildcard prompt can range from 0 LoRAs to 10.

Since I've 'got' you here and we're on the subject, I'd like your take on a small matter. For example, I have a portrait of someone and want to put them into different scenes, like playing basketball or driving a car.

The LoRA works in A1111. Currently I have the LoRA Stacker from Efficiency Nodes, but it works only with the proprietary Efficient KSampler node, and to make it worse the repository was archived on Jan 9, 2024, meaning it could permanently stop working with the next ComfyUI update any minute now. After some investigation, I found that Forge seems to ignore the LoRA strength.

As you can see, it's not simply scaling strength; the concept can change as you increase the smooth step.
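The 0.2-step sweep and the randomize-per-generation question are both trivial to express as plain Python values, whatever node ends up consuming them; a sketch (function name is mine):

```python
import random

def strength_sweep(start=0.0, stop=1.0, step=0.2):
    """Strength values for an every-0.2 comparison run."""
    n = round((stop - start) / step)
    return [round(start + i * step, 4) for i in range(n + 1)]

print(strength_sweep())           # [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]
print(random.uniform(-1.0, 1.0))  # one random float strength in [-1, 1]
```

Feeding each value into a converted strength input (the same widget-to-input trick described above for lora_name) queues one image per strength, no XY plot needed.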
As usually AnimateDiff has trouble keeping consistency, I tried making my first LoRA. Generate your images! Hope you liked this guide! Edit: added more example images. A little late to this post, but I have the solution for Automatic1111 users. Stopped linking my models here for that very reason.

Take a LoRA of person A and a LoRA of person B and place them into the same photo (SD1.5, not XL). I know you can do this by generating an image of 2 people using 1 LoRA (it will make the same person twice) and then inpainting the face with a different LoRA, using OpenPose / Regional Prompter. Assuming both LoRAs have trigger words, the easiest thing to try is to use the BREAK keyword to separate the character descriptions, with each sub-prompt containing a different trigger word (it doesn't matter where in the prompt the LoRAs are called, though). Once it's working, then start fiddling with the resolution. To test, render once with 1024x1024 at strength 0.0, again at 1.0, and the impact should be obvious.

What am I doing wrong here in ComfyUI? The LoRA is an Asian woman. Now I want to use a video-game character LoRA. It remains as "undefined". I am able to drag in sample files like the videos from the CivitAI page and it will update the "Lora_name" field as expected, but it will not run even if I have that LoRA loaded.

Has anyone gotten a good, simple ComfyUI workflow for SD1.5 for converting an anime image of a character into a photograph of the same character while preserving the features? I am struggling; hell, just telling me some good ControlNet strength and image denoising values would already help a lot!

Not sure why one can't lower the strength of the trigger word itself if they do need to add extra stuff to a prompt. Adding a LoRA that was trained on anime and simple 2D drawing… with an "add detail" LoRA…
On SD1.5 with the following settings: LCM LoRA strength 1.0 (I should probably have put the clip_strength to 0, but I did not); sampler: Euler; scheduler: Normal; steps: 16. My favorite recipe was with the Restart KSampler though, at 64 steps, but it had its own limitations (no SGM_Uniform scheduler for AnimateDiff). Another recipe: LoRA: Hyper SD 1.5 8-steps CFG; LoRA strength: 1.0; scheduler settings: CFG Scale: 1.5; Steps: 4; Scheduler: LCM.

Some LoRAs may work from -1.0 to +1.0, and some may support values outside that range. But it's not really predictable how it's changing.

What does the LoRA strength_clip do? If the clip is the text or trigger word, isn't it the same to put (loratriggerword:1.2) or something? No: it is the strength of the LoRA as applied to the CLIP model vs. the main MODEL. In practice, both are usually highly correlated, but there are situations where you want high model strength to capture a style but low clip strength to avoid a certain keyword in the captions. Most LoRAs also need one or more keywords to trigger.

You can do img2img at 1.0 denoising strength and get amazing results. If the denoising strength must be brought up to generate something interesting, ControlNet can help to retain composition.

Oh, another LoRA tip to throw on the bonfire: since everything is a mix of a mix of a mix… watch out for LoRA 'overfitting' that makes your images look like deep-fried memes. Any advice or resource regarding the topic would be greatly appreciated!

Works well, but stretches my RAM to the absolute limit. In A1111 they are placed in models/Lora and called like this: <lora:loraname:0.x>. So to use them in ComfyUI, load them like you would any other LoRA and change the strength to somewhere between 0.5 and 0.8.

I'm starting to dream in prompts. "I don't even see the prompts anymore. All I see is (masterpiece) Blonde, 1girl Brunette, <lora:Redhead:0.8> Red head."

To facilitate the listing, you could start to type "<lora:" and then a bunch of LoRAs appears to choose from, and the list is filtered the more you type of the LoRA's name. I find it starts to look weird if you have more than three LoRAs at the same time.

Generate a set of "sample images" for commonly used models, LoRAs etc., so that I can either cut and paste their metadata into Automatic1111 or open the PNG in ComfyUI to recover the workflow.

I'm quite new to ComfyUI. Maybe try putting everything except the LoRA trigger word in (prompt here:0.75) to weaken it in relation to the trigger word. To prevent the application of a LoRA that is not used in the prompt, you need to directly connect the model that does not have the LoRA applied.

When I use this LoRA it always messes up my image. It does work if connected with the LCM LoRA, but the images are too sharp where they shouldn't be (burnt), and not sharp enough where they should be. Some prompts which work great without the LoRA produce terrible results. Try changing that, or use a LoRA stacker that allows separate lora/clip weights.

I feel like it works better if I put it in the prompt with <lora:name-of-LCM-lora-file:0.7>, which would use the LCM at 70% strength. Or just skip the LoRA-download Python code and upload the LoRA manually to the loras folder.

What is your LoRA strength in Comfy SDXL? My LoRA doesn't appear in the images at 1.0. You can, by using Prompt S/R, where one LoRA will be replaced by the next. And also, value the generations with the same LoRA strength from 1 to 5 according to how well the concept is represented.
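The style-vs-captions trade-off described above maps directly onto the two loader strengths. As a usage example continuing the apply_lora_chain sketch from earlier (the file path is hypothetical):

```python
# High MODEL strength to keep the learned style, low CLIP strength so
# captioned keywords from the training set don't dominate the prompt.
# Hypothetical path; apply_lora_chain is the sketch defined earlier.
model, clip = apply_lora_chain(model, clip, [
    ("loras/ink_style.safetensors", 1.0, 0.3),
])
```

A single-number UI like A1111's slider collapses these two values into one; ComfyUI's loader exposes them separately, which is the whole point of the strength_clip widget.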
There are official LoRA examples at comfyanonymous.github.io/ComfyUI_examples/lora/. For a slightly better UX, try a node called CR Load LoRA from Comfyroll Custom Nodes.

So on X type, select Prompt S/R; on X values, type the name of your 1st LoRA, 2nd LoRA, 3rd LoRA, etc. In the LoRA Loader I set the strength to "1", essentially turning it "on"; in the prompt I'm calling the LoRA with <lora:whatever:1.0>, which calls it for that particular image with its standard strength applied. You can call several at once, e.g. "…:0.7>, <lora:transformerstyle:0.x>".

If you have a set model + LoRA stack you want to save and reuse, you can use the Save Checkpoint node at the output of the model + LoRA stack merge to reuse it as a base model in the future.

I found I can send the clip to the negative text encode…. And a few LoRAs require a positive weight in the negative text encode. Before clicking Queue Prompt, be sure that the LoRA in the LoRA Stack is switched ON and you have selected your desired LoRA.

Decreasing the LoRA strength, removing negative prompts, decreasing/increasing steps, messing with clip skip: none of it worked, and the outcome is always full of digital artifacts and is completely unusable.

Choose a weight between 0.5-1.0. Rescaling to 0.8 might be beneficial if sharing on Civitai, as users often default to 1.0 without reading the settings.

I tested all of them, and they are now accompanied by a ComfyUI workflow that will get you started in no time.

Prompt: "<lora:skatirFace:0.7>, scared, looking down, panic, screaming, a portrait of a ginger teen, blue eyes, short bob cut, ginger, black winter dress, fantasy art, 4K resolution, unreal engine, high resolution wallpaper, sharp focus". Final version.

If we've got LoRA loader nodes with actual sliders to set the strength value, I've not come across them yet. Edit: thank you everyone, especially u/VeryAngrySquirrel for mentioning Mikey Nodes! The "Wildcard And Lora Syntax Processor" is exactly what I'm looking for!

If I click on "Lora_name", literally nothing happens. If I have a chain of LoRAs and I… This slider is the only setting you have access to in A1111.

From a user perspective, the delta (which I'm calling a ConDelta, for Conditioning Delta, or Concept Delta if you prefer) can be used the same way a LoRA can: by loading it with a node and setting a strength (positive or negative).

Hi everyone, I am looking for a way to train LoRAs using ComfyUI. I was using it successfully for SD1.5. So my thought is that you set the batch count to 3, for example, and then you use a node that changes the weight for the LoRA on each batch.

Does "<lora:easynegative:1.0>", if written in the negative prompt without any other LoRA loading, do its job? In Efficiency Nodes, if I load easynegative and give it a -1 weight, does it work like a negative-prompt embed? Do I have to use the trigger word for LoRAs I embed like this? Is there a ComfyUI Discord server?

I've trained a LoRA with two different photo sets/modes, and different trigger (uniquely trained) words to distinguish them, but I was using A1111 (or Vlad) at the time and have never tried it in ComfyUI yet.

Here's mine: I use a couple of custom nodes - LoRA Stacker (from the Efficiency Nodes set) feeding into the CR Apply LoRA Stack node (from the Comfyroll set). The output from the latter is a model with all the LoRAs included, which can then route into your KSampler. LoRA usage is confusing in ComfyUI.

I use SD Library Notes, and copy everything -- EVERYTHING! -- from the model card into a text file, and make sure to use Markdown formatting.
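Prompt S/R is, under the hood, just per-run string substitution: the first X value is the search text, and each value in the list (including the first) is swapped in for one generation. A sketch of that behavior, not A1111's actual code:

```python
def prompt_sr(prompt, values):
    """values[0] is the search text; each entry produces one run."""
    search = values[0]
    return [prompt.replace(search, v) for v in values]

base = "portrait, <lora:styleA:0.8>"
for p in prompt_sr(base, ["styleA", "styleB", "styleC"]):
    print(p)
# portrait, <lora:styleA:0.8>
# portrait, <lora:styleB:0.8>
# portrait, <lora:styleC:0.8>
```

Because it is plain text replacement, it can swap LoRA names, strengths, or any other substring, which is why it works for the LoRA-comparison use case above.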
Model + LoRA 100%; Model + LoRA 75%; Model + LoRA 50%. And then tweak around as necessary. PS: this also works for ControlNet with the ConditioningAverage node; especially considering that a high-strength ControlNet at low resolution will sometimes look jagged in higher-res output, lowering the effect in the hires-fix steps can mitigate the issue.

This is because the model's patch for the LoRA is applied regardless of the presence of the trigger word. In ComfyUI, you don't need to use the trigger word (especially if it's only one for the entire LoRA); mess with the strength_model setting in the LoRA loader instead.

I want to test some basic LoRA weight comparisons, like in the WebUI where you do an XYZ plot. In your prompt, put your 1st LoRA; this will then be replaced by the next on your list when you run the script. I can select the LoRA I want to use and then select Anythingv3 or Protogen 2.2 and go to town. I have yet to see any switches allowing more than 2 options, which is the major limitation here.

Since adding endless LoRA nodes tends to mess up even the simplest workflow, I'm looking for a plugin with a LoRA stacker node.

When I start a training session and I don't see the downtrend in loss that I'm hoping for, I abort the process to save time and retry with new values.

I had some success using stuff like position/concept LoRAs from SDXL in Pony, but celebs? Characters? Nope.

To my knowledge, Combine and Average work almost the same, but Combine averages the weights based on the prompts, and Average can average the conditioning tensors themselves. StabilityAI just released new ControlNet LoRAs for SDXL, so you can run these on your GPU without having to sell a kidney to buy a new one. If you set a ControlNet strength to 0.000, it is disabled and will be bypassed.

It then applies ControlNet (1.1) using a Lineart model at strength 0.75, which is used for a new txt2img generation of the same prompt at a standard 512 x 640 pixel size, using a CFG of 5 and 25 steps with the uni_pc_bh2 sampler, but this time adding the character LoRA for the woman featured (which I trained myself), and here I switch to Wyvern v8.

In ComfyUI you use LoraLoader to use a LoRA, and it contains the strength parameter as well. Is there a node that lets me decide the strength schedule for a LoRA? Or can I simply turn a LoRA off by putting it in the negative prompt? I have a node called "Lora Scheduler" that lets you adjust weights throughout the steps, but unfortunately I'm not sure which node pack it's in. If I set the strength high, and the start step at a higher value like 0.4, it renders the co…

So far the only LoRA I used was either in A1111 or the LCM LoRA; now I made my own, but it doesn't seem to work. I'd say one image in each batch of 8 is a keeper.

The base model's "Model" and "Clip" outputs go to the respective "Model" and "Clip" inputs of the first Load LoRA node. The positive has a LoRA loader. The negative has a LoRA loader. But what do I do with the model?

Put in the same information as you would with conditioning 1 and conditioning 2, but you can control the weight of the first conditioning (conditioning_to) with the conditioning_to_strength variable. Most of the time when you inpaint or use ADetailer, you will want to reduce CFG and LoRA weight, sometimes prompt weight, because they will overcook the image at lower values than in txt2img.

An added benefit: if I train the LoRA with a 1.5 model, I can then use it with many different other checkpoints within the WebUI to create many different styles of the face.

From ChatGPT: Guide to Enhancing Illustration Details with Noise and Texture in Stable Diffusion (based on 御月望未's tutorial).
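The Model + LoRA 100/75/50% trick and the conditioning_to_strength control come down to the same operation: a weighted average. A sketch of the math (not the ConditioningAverage node's actual source), with arbitrary tensor shapes:

```python
import torch

def conditioning_average(cond_to, cond_from, conditioning_to_strength):
    # plain lerp: 1.0 -> only cond_to, 0.0 -> only cond_from
    w = conditioning_to_strength
    return cond_to * w + cond_from * (1.0 - w)

a = torch.randn(1, 77, 768)  # e.g. conditioning from the "model + LoRA" prompt
b = torch.randn(1, 77, 768)  # e.g. conditioning without the LoRA
print(conditioning_average(a, b, 0.75).shape)  # torch.Size([1, 77, 768])
```

Setting conditioning_to_strength to 0.75 or 0.50 is what produces the "LoRA 75%" and "LoRA 50%" variants in the comparison above.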
The classipeint LoRA actually does a really great job of overlapping with that style, if you throw in artist names and aesthetic terms with a slightly lower LoRA strength.

This guide, inspired by 御月望未's tutorial, explores a technique for significantly enhancing the detail and color in illustrations using noise and texture in Stable Diffusion.

In most UIs, adjusting the LoRA strength is only one number, and setting the LoRA strength to 0.8, for example, is the same as setting both strength_model and strength_clip to 0.8.

I don't find ComfyUI faster: I can make an SDXL image in Automatic1111 in 4.2 seconds with TensorRT (the same image takes 5.6 seconds in ComfyUI), and I cannot get TensorRT to work in ComfyUI, as the installation is pretty complicated and I don't have 3 hours to burn doing it.

So just add 5/6/however many max LoRAs you'll ever use, then turn them on/off as needed.

It worked normally with the regular 1.5 version of Stable Diffusion; however, when I tried using it with other models, not all worked. Lowering the strength of the trigger word doesn't fix this problem. If I lower the strength, it loses the characteristics of the original character. I'm starting to believe it isn't on my end and the LoRAs are just completely broken, but if anyone else could test them, that would be awesome.

In A1111, my SDXL LoRA is perfect at :1.… Not sure how to configure the LoRA strengths in ComfyUI. Do you experience the same? Is the syntax for LoRA strength changed in Forge? <lora:foobar:0.5> generates the same image as <lora:foobar:1>. The issue has been that Automatic1111 didn't support this initially, so people ended up trying to set up workarounds.

I recommend the DPM samplers, but use your favorite.

The image below is the workflow with the LoRA Stack added and connected to the other nodes. Because a LoRA places a layer in the currently selected checkpoint.

Or do something even simpler: just paste the links of the LoRAs into the model download link and then move the files to the different folders.

Where do I want to change the number to make it stronger or weaker? In the Loader, or in the prompt? Both? Thanks.

I attached 2 images, only inpainting and using the same LoRA; the white-haired one is when I used A1111, the other is using ComfyUI (Searge).
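The add-N-loaders-and-toggle pattern above can be expressed with the apply_lora_chain sketch from earlier, assuming (as the 0.000-strength comments elsewhere in the thread suggest) that a zero strength makes the patch a no-op; paths here are hypothetical:

```python
# On/off switching without rewiring: drop a LoRA's strengths to 0.0 and its
# patch becomes a no-op (matching the "0.000 = disabled/bypassed" behavior
# described above). Paths are hypothetical; apply_lora_chain is from earlier.
LORAS = [
    ("loras/detail_tweaker.safetensors", 1.0, 1.0, True),
    ("loras/film_grain.safetensors",     0.6, 0.6, False),  # switched off
]
specs = [(p, sm if on else 0.0, sc if on else 0.0) for p, sm, sc, on in LORAS]
model, clip = apply_lora_chain(model, clip, specs)
```

This is the wiring-level equivalent of a stack node's per-entry on/off switch: the chain stays in place, and only the strengths change.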