Cloud GPU via Interstice.cloud #532
Replies: 6 comments 4 replies
-
Thanks, this is some hard and nice work 👍 Is there any way to refresh the remaining tokens? What do you expect the price to be in the future?
-
Hello! Thank you for this wonderful work. We're also looking to have trainers on Krita AI to help creatives push beyond the limits of Midjourney. Looking forward to discussing this further.
-
Thanks for testing!
Currently tokens are limited to keep costs somewhat under control; I might extend this later. I still need to evaluate usage statistics to figure out pricing. I'll post an update once that's done and, if it looks reasonable, go forward (this may then take some time due to business/legal matters, though...). In the meantime, SDXL is now deployed too; version 1.16.1 is required to use it.
-
I would MUCH rather pay for your cloud than jump through all the hoops of Vast.ai. Plus, you deserve the financial support. Question: how would LoRAs work? Would we just upload them like on Vast.ai, or could we use what's in our local directory?
-
Time for some updates.

**About Tokens**

Generating images requires GPU compute time, and that costs money. The idea of Tokens is to give a rough estimate of how much compute is required, so people have an idea of what a generation will cost. Ideally it shouldn't be too complicated. There are three factors which have a large impact:
Also, SDXL has bigger up-front and maintenance costs than SD1.5.

**The Data**

I did some initial tests and whipped up a formula that was used to compute Tokens. Now there's actual usage data, which allows comparing Tokens predicted versus actual compute time required. Initially the variance was quite high (±80% on average). After the latest updates it's down to ±30%, but it hasn't been live very long.

**Resolution**

Image size is tricky. Other services usually offer some fixed choices (1MP, 2MP) with various aspect ratios, and things are easy. But when working on a canvas, resolutions need to be very flexible. The gist is: large resolutions are really expensive.

**Non-linear progression**

Where a 2048x1024 image takes 18 seconds, 2048x2048 already takes 60: twice the pixels, more than three times the cost. Double the resolution again and it's 220 seconds. It quickly becomes very inefficient. The latest version now takes this into account.

**Maximum resolution**

The plugin's performance settings have a "Maximum resolution" option, which limits the image size used for diffusion. If the canvas is bigger, some relatively cheap but decent upscaling is used to make it fit. This is especially helpful for people who aren't aware of how costly high resolutions are and load photos or illustrations at camera/print resolutions. In v1.17.2 the default maximum resolution is now 6MP (6 million pixels). This number can be tweaked in the performance settings, but the maximum the service accepts is 8MP (and that's already very expensive). If you want diffusion at higher resolutions, use the Upscaling workspace: it has no limit and uses tiling to scale better.

**Fill**

What actually is the resolution of the image? It's not always clear. Inpainting (selections at 100% strength), for example, will run an initial pass on a downscaled image with a lot of context, then upscale, but also crop to a smaller region around the selection. So there are multiple resolutions.
Also, because it's two passes, the number of steps that run is actually higher than what is configured. The latest changes take some of this into account, but it's perhaps still not close enough. The biggest outliers currently fall into this category.

**Conclusions**

It's quite difficult to find a good trade-off between keeping it simple and making it fair!

**Examples**
**Formula**

I'm not going to post it here as it may still change, but hey, it's open source.

**UI**

Also in v1.17.2, the UI now shows how much a generation will cost when you hover over the button.

**Next**
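As an illustration, the non-linear progression described above can be turned into a rough cost estimator. The three timings (2MP → 18s, 4MP → 60s, 16MP → 220s) are from this post; the log-log interpolation scheme is my own sketch, not the actual Tokens formula (which, as noted, isn't posted here).

```python
import math

# Measured samples quoted in the post: (megapixels, seconds of GPU time).
SAMPLES = [(2.0, 18.0), (4.0, 60.0), (16.0, 220.0)]

def estimate_seconds(megapixels: float) -> float:
    """Estimate compute time by interpolating between samples in log-log space.

    Outside the measured range, extrapolate with the slope of the nearest
    segment -- a guess, not something the service guarantees.
    """
    pts = [(math.log(p), math.log(t)) for p, t in SAMPLES]
    x = math.log(megapixels)
    if x <= pts[0][0]:
        (x0, y0), (x1, y1) = pts[0], pts[1]
    elif x >= pts[-1][0]:
        (x0, y0), (x1, y1) = pts[-2], pts[-1]
    else:
        for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
            if x0 <= x <= x1:
                break
    y = y0 + (y1 - y0) * (x - x0) / (x1 - x0)
    return math.exp(y)

print(round(estimate_seconds(2.0)))   # 18 (matches the sample)
print(round(estimate_seconds(8.0)))   # somewhere between 60 and 220
```

The interpolation reproduces the quoted samples exactly and captures the "twice the pixels, more than twice the cost" behaviour in between.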
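A minimal sketch of the "Maximum resolution" clamping described above, assuming a simple aspect-preserving downscale. The 6MP default and 8MP service cap are from the post; the function itself is a guess at the logic, not the plugin's actual implementation.

```python
import math

DEFAULT_MAX_PIXELS = 6_000_000   # v1.17.2 default (per the post)
SERVICE_MAX_PIXELS = 8_000_000   # hard cap on the service side (per the post)

def diffusion_size(width: int, height: int,
                   max_pixels: int = DEFAULT_MAX_PIXELS) -> tuple[int, int]:
    """Return the resolution used for diffusion: unchanged if within budget,
    otherwise scaled down uniformly so the pixel count fits."""
    max_pixels = min(max_pixels, SERVICE_MAX_PIXELS)
    pixels = width * height
    if pixels <= max_pixels:
        return width, height
    scale = math.sqrt(max_pixels / pixels)
    return round(width * scale), round(height * scale)

# A 24MP camera photo gets scaled down to fit the 6MP budget:
print(diffusion_size(6000, 4000))  # (3000, 2000)
```

After diffusion at the clamped size, the result would be upscaled back to the canvas resolution, which is where the "relatively cheap but decent upscaling" mentioned above comes in.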
-
It would be great if you could add Autsimmix as a base model for the cloud service.
-
Interstice.cloud
In short: this is an online service which provides all the functionality needed by the Krita plugin, with the same results you'd get locally.
No installation and no GPU needed!
Why?
One of the goals of this project is to make generative AI accessible. Local Stable Diffusion requires a powerful GPU, plus some time and technical skill to set it up. No matter what I do to make it easier, the ecosystem is so volatile and changes so fast that it will remain a struggle for some time to come.
Of course there are lots of services out there already, but they are usually restrictive, closed, and lack the tools Krita brings. They also tend to be either very simple or very technical.
Sustainability is also a reason. This project requires a lot of time, but makes no money. It will always be open source and local will always be an option. But if it can make some money by offering optional convenience, that would be great!
What about the existing Runpod/Vast template?
It has several issues:
Work in progress
Currently this is an experiment, up for testing:
Feedback
If you're interested, please give it a try. Some usage data is needed to figure out what the service has to cost to cover expenses.
You will need an account on www.interstice.cloud (only requires a valid email), and version 1.16.0 of the plugin.