Conversation
```python
from vastai import Worker, WorkerConfig, HandlerConfig, LogActionConfig, BenchmarkConfig


worker_config = WorkerConfig(
    model_server_url="http://127.0.0.1",
```
Probably don't even need to specify this, since we default to this value.
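The point can be shown with a stand-in (this `WorkerConfig` is a hypothetical dataclass for illustration, not the real vastai class): when a field's default already matches the value being passed, the argument can simply be dropped.

```python
from dataclasses import dataclass


# Stand-in for the real vastai WorkerConfig, for illustration only;
# per the comment above, the SDK defaults model_server_url to this value.
@dataclass
class WorkerConfig:
    model_server_url: str = "http://127.0.0.1"


# The explicit value in the diff and the default are identical,
# so specifying it adds nothing.
assert WorkerConfig() == WorkerConfig(model_server_url="http://127.0.0.1")
```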
| """Transform the incoming client payload before it reaches the backend.""" | ||
| # The client sends {"name": "World"}, but our backend ignores the body. | ||
| # We could validate, reshape, or enrich the payload here. | ||
| return payload |
Maybe actually having some simple code here, instead of just saying it's possible, would be better.
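One minimal transform along the lines the comment asks for. The payload shape (`{"name": "World"}`) comes from the diff above; the specific validation and enrichment choices here are illustrative, and `transform_payload` is a hypothetical name:

```python
def transform_payload(payload: dict) -> dict:
    """Validate and enrich the client payload before it reaches the backend."""
    # Validate: the client is expected to send {"name": "World"}.
    if "name" not in payload or not isinstance(payload["name"], str):
        raise ValueError("payload must contain a string 'name' field")
    # Reshape/enrich: trim whitespace and add a derived greeting field.
    name = payload["name"].strip()
    return {"name": name, "greeting": f"Hello, {name}!"}
```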
```bash
#!/bin/bash
cd /workspace
git clone <YOUR_REPO_URL> app  # e.g. https://github.com/youruser/serverless-hello-world.git
cd app
pip install -r requirements.txt
python model_backend.py &
python worker.py
```
Good. But we need to show them how to pull in the `start_server.sh` script from the pyworker repo, which does a bunch of helpful stuff:
bootstrap_script=https://raw.githubusercontent.com/vast-ai/pyworker/refs/heads/main/start_server.sh;
curl -L "$bootstrap_script" | bash;
^ Use this instead of `python worker.py`. It ultimately runs `python worker.py`, but the script will also automatically pull in the repo, which we already specified...
This is tricky because the correct way to do things today is to have two repos: one for your model backend and one for your pyworker. You then pull in the pyworker repo with the script and specify it with the PYWORKER_REPO flag. But this approach of putting the model code and the pyworker together is much better, and it's what we are going for with deployments. However, it's not currently supported.
I actually think the best thing to do is to update our flow to allow this, since it is much cleaner and we are moving to the "one repo" pattern anyway.
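Putting the suggestion together, the on-start script above might look something like this. The bootstrap URL is taken from the comment; whether `start_server.sh` reuses an already-cloned repo, or needs the PYWORKER_REPO flag mentioned above, is exactly the open question being discussed, so treat this as a sketch, not a supported flow:

```shell
#!/bin/bash
cd /workspace
git clone <YOUR_REPO_URL> app  # e.g. https://github.com/youruser/serverless-hello-world.git
cd app
pip install -r requirements.txt
python model_backend.py &

# Bootstrap via the pyworker repo's start_server.sh instead of calling
# `python worker.py` directly; the script does extra setup and
# ultimately runs `python worker.py` itself.
bootstrap_script=https://raw.githubusercontent.com/vast-ai/pyworker/refs/heads/main/start_server.sh
curl -L "$bootstrap_script" | bash
```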