Replies: 4 comments
It would be more stable to have your code drive the algorithm one image at a time, which also produces more consistent output. The handler currently supports reading multiple images at once in a single session using multiple threads, and the loading speed is quite fast. However, if you want to process different images in multiple batches at once, you should try deploying the model with vLLM.
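A minimal sketch of the one-image-at-a-time approach. This assumes the `llm` object is a `Llama` instance created with this fork's `MTMDChatHandler`, and that the handler accepts the OpenAI-style `image_url` message layout that llama-cpp-python's other vision chat handlers use; treat the message schema and parameter names as assumptions, not confirmed API:

```python
from pathlib import Path


def image_message(image_path: str, prompt: str) -> list:
    """Build one OpenAI-style user message holding a single image plus a text prompt."""
    return [{
        "role": "user",
        "content": [
            # file:// URI pointing at the local image (schema assumed from
            # llama-cpp-python's other multimodal chat handlers)
            {"type": "image_url",
             "image_url": {"url": Path(image_path).absolute().as_uri()}},
            {"type": "text", "text": prompt},
        ],
    }]


def describe_images_one_by_one(llm, image_paths, prompt="Describe this image."):
    """Run one chat completion per image so every image gets its own answer."""
    results = {}
    for path in image_paths:
        resp = llm.create_chat_completion(
            messages=image_message(path, prompt),
            max_tokens=256,  # also caps overly long per-image answers
        )
        results[path] = resp["choices"][0]["message"]["content"]
    return results
```

Because each image gets its own completion, the answers never need to be split apart afterwards.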
@JamePeng Thanks! You say "it supports reading multiple images at once in a single session using multiple threads": do I get a separate description for each image? And do you have a simple example? vLLM maybe, but I am on Windows and I don't want Docker, because of WSL and too many dependencies ^^
Similar to the qwen3-vl example in the README: given n images, ask it to describe those n images.
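For reference, a hedged sketch of what that multi-image prompt looks like: all n images go into a single user message, followed by one text prompt. The content layout is assumed from the OpenAI-style schema llama-cpp-python's vision handlers use:

```python
from pathlib import Path


def multi_image_message(image_paths, prompt):
    """One user message carrying several images followed by a single text prompt."""
    content = [
        {"type": "image_url", "image_url": {"url": Path(p).absolute().as_uri()}}
        for p in image_paths
    ]
    content.append({"type": "text", "text": prompt})
    return [{"role": "user", "content": content}]
```

Note that this produces one completion covering all of the images together, not one answer per image.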
But that is only one answer for the whole image list, right? I need them all separate...
Hey,
I use the new chat handler "MTMDChatHandler" with the small qwen3.5-08b.
I want descriptions of many images, but I want to save a separate .txt file, with the same base name as the image, for each one.
Feeding, say, 8 images at once does not seem like the right approach, because it is too complicated to separate the answers afterwards and to suppress overly long answers per image.
I tried loading the model 4 times, each in its own worker process (one per core), but the speed is the same...
Any ideas? Can such a small model be fast on a CPU (8 cores)?
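The "one .txt per image, same base name" part can be sketched independently of the model. In this hypothetical helper, `describe` stands in for whatever function wraps a single-image `create_chat_completion` call with the MTMDChatHandler model; the helper only handles file discovery and writing, and skips images that already have a caption so a long run can be resumed:

```python
from pathlib import Path

# Common raster extensions; extend as needed for your dataset.
IMAGE_EXTS = {".jpg", ".jpeg", ".png", ".bmp", ".webp"}


def caption_folder(folder, describe, prompt="Describe this image in 2-3 sentences."):
    """For each image in `folder`, call describe(path, prompt) and write the
    answer to a .txt file with the same base name. Returns the written paths."""
    written = []
    for img in sorted(Path(folder).iterdir()):
        if img.suffix.lower() not in IMAGE_EXTS:
            continue
        out = img.with_suffix(".txt")
        if out.exists():
            continue  # resume-friendly: skip images that are already captioned
        out.write_text(describe(str(img), prompt), encoding="utf-8")
        written.append(out)
    return written
```

Keeping the prompt short and capping `max_tokens` inside `describe` is a simpler way to suppress overly long answers than trying to split one 8-image response afterwards.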