Branch
other (specify below)
Branch (if other)
b3nw/LLM-API-Key-Proxy
Version / Tag / Commit
build-20260409-1-b6ba75f
Provider(s) Affected
gemini_cli
Deployment Method
Docker
Operating System
Ubuntu 22
Python Version (if running from source)
No response
Bug Description
Hello,
It seems this configuration is not applied by the LLM proxy; the remaining request count gets very close to zero despite the following in my .env:
CUSTOM_CAP_COOLDOWN_GEMINI_CLI_T2_PRO="offset:3600"
CUSTOM_CAP_GEMINI_CLI_T2_PRO=225
CUSTOM_CAP_GEMINI_CLI_T2_25_FLASH=1425
CUSTOM_CAP_GEMINI_CLI_T2_3_FLASH=1425
The same issue occurs if I use percentages instead:
CUSTOM_CAP_GEMINI_CLI_T2_PRO="90%"
CUSTOM_CAP_GEMINI_CLI_T2_25_FLASH="90%"
CUSTOM_CAP_GEMINI_CLI_T2_3_FLASH="90%"
The top of my .env file is:
LOG_LEVEL=info
ENABLE_REQUEST_LOGGING=false
OAUTH_REFRESH_INTERVAL=600
GLOBAL_TIMEOUT=300
SKIP_OAUTH_INIT_CHECK=true
ROTATION_TOLERANCE=2.5
Steps to Reproduce
- Use the gemini_cli provider
- Use up most of the daily requests
- Watch the remaining requests drop to 0/1500
Expected Behavior
I'd expect no fewer than about 75 remaining requests (the 1500 daily limit minus the 1425 cap).
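For context, this is the cap semantics I'm assuming (a sketch of the expected behavior, not the proxy's actual code; the function name `remaining_after_cap` is hypothetical):

```python
def remaining_after_cap(daily_limit, used, cap):
    """How many requests the proxy should still allow under a custom cap.

    `cap` is either an absolute count (e.g. 1425) or a percentage string
    (e.g. "90%") of the daily limit, matching the CUSTOM_CAP_* .env values.
    """
    if isinstance(cap, str) and cap.endswith("%"):
        effective = int(daily_limit * float(cap.rstrip("%")) / 100)
    else:
        effective = int(cap)
    return max(effective - used, 0)

# With CUSTOM_CAP_GEMINI_CLI_T2_25_FLASH=1425 and a 1500/day limit,
# the proxy should stop at the cap, leaving 1500 - 1425 = 75 untouched.
assert remaining_after_cap(1500, 1425, 1425) == 0
assert remaining_after_cap(1500, 0, "90%") == 1350
```

Instead, the observed behavior is as if the cap were never read and the full 1500 is consumed.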
Actual Behavior
The remaining requests drop all the way to zero.
Error Logs / Messages
Pre-submission Checklist