When I got the error "Unsupported parameter: 'max_tokens' is not supported with this model. Use 'max_completion_tokens' instead.", I updated my config file to also include max_completion_tokens:
{
"version": "1.0",
"base_url": "https://api.openai.com/v1",
"main_model": "gpt-5-nano-2025-08-07",
"cluster_model": "gpt-5-nano-2025-08-07",
"default_output": "docs",
"max_tokens": 32768,
"max_completion_tokens": 32768, <-- Here
"max_token_per_module": 36369,
"max_token_per_leaf_module": 16000,
"max_depth": 5
}
But it still failed with the same error.
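For reference, the rejection seems to come from the OpenAI API itself rather than from the config: gpt-5-nano rejects max_tokens on the Chat Completions endpoint and only accepts max_completion_tokens. A minimal standalone repro (this script is just my illustration with a recent openai Python SDK, not CodeWiki code):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# This variant fails with a 400 "unsupported_parameter" error on gpt-5-nano:
# client.chat.completions.create(
#     model="gpt-5-nano-2025-08-07",
#     messages=[{"role": "user", "content": "ping"}],
#     max_tokens=128,
# )

# The same request succeeds when max_completion_tokens is used instead.
response = client.chat.completions.create(
    model="gpt-5-nano-2025-08-07",
    messages=[{"role": "user", "content": "ping"}],
    max_completion_tokens=128,
)
print(response.choices[0].message.content)
```

So the config value alone can't help as long as the backend keeps sending max_tokens, which is what the traceback below shows.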
Here is my config:
$ codewiki config show
CodeWiki Configuration
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Credentials
API Key: sk-p...VOQA (in system keychain)
API Settings
Base URL: https://api.openai.com/v1
Main Model: gpt-5-nano-2025-08-07
Cluster Model: gpt-5-nano-2025-08-07
Fallback Model: glm-4p5
Output Settings
Default Output: docs
Token Settings
Max Tokens: 32768
Max Token/Module: 36369
Max Token/Leaf Module: 16000
Decomposition Settings
Max Depth: 5
Agent Instructions
Using defaults (no custom settings)
Configuration file: C:\Users\<user>\.codewiki\config.json
$ codewiki generate --create-branch --github-pages --verbose
[1/4] Validating configuration...
✓ Configuration valid
[2/4] Validating repository...
✓ Repository valid: <repo>
[17:46:43] Detected languages: JavaScript (13 files)
✓ Output directory: C:\Users\<user>\Desktop\Projects\<project>\docs
[3/4] Creating git branch...
✓ Created branch: docs/codewiki-20260302-174643
[4/4] Generating documentation...
[17:46:43] Max tokens: 32768
[17:46:43] Max token/module: 36369
[17:46:43] Max token/leaf module: 16000
[17:46:43] Max depth: 5
[00:00] Phase 1/5: Dependency Analysis
[00:00] Initializing dependency analyzer...
[00:00] Parsing source files...
[00:00] Found 22 leaf nodes
[00:00] Dependency Analysis complete (0.4s)
[00:00] Phase 2/5: Module Clustering
[00:00] Clustering modules with LLM...
✗ Module clustering failed: Error code: 400 - {'error': {'message': "Unsupported parameter: 'max_tokens' is not supported with this model. Use 'max_completion_tokens' instead.", 'type': 'invalid_request_error', 'param': 'max_tokens', 'code': 'unsupported_parameter'}}
✗ Traceback: Traceback (most recent call last):
File "C:\Users\<user>\AppData\Local\Programs\Python\Python314\Lib\site-packages\codewiki\cli\adapters\doc_generator.py", line 216, in _run_backend_generation
module_tree = cluster_modules(leaf_nodes, components, backend_config)
File "C:\Users\<user>\AppData\Local\Programs\Python\Python314\Lib\site-packages\codewiki\src\be\cluster_modules.py", line 62, in cluster_modules
response = call_llm(prompt, config, model=config.cluster_model)
File "C:\Users\<user>\AppData\Local\Programs\Python\Python314\Lib\site-packages\codewiki\src\be\llm_services.py", line 80, in call_llm
response = client.chat.completions.create(
model=model,
...<2 lines>...
max_tokens=config.max_tokens
)
File "C:\Users\<user>\AppData\Local\Programs\Python\Python314\Lib\site-packages\openai\_utils\_utils.py", line 286, in wrapper
return func(*args, **kwargs)
File "C:\Users\<user>\AppData\Local\Programs\Python\Python314\Lib\site-packages\openai\resources\chat\completions\completions.py", line 1204, in create
return self._post(
~~~~~~~~~~^
"/chat/completions",
^^^^^^^^^^^^^^^^^^^^
...<47 lines>...
stream_cls=Stream[ChatCompletionChunk],
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "C:\Users\<user>\AppData\Local\Programs\Python\Python314\Lib\site-packages\openai\_base_client.py", line 1297, in post
return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\<user>\AppData\Local\Programs\Python\Python314\Lib\site-packages\openai\_base_client.py", line 1070, in request
raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "Unsupported parameter: 'max_tokens' is not supported with this model. Use 'max_completion_tokens' instead.", 'type': 'invalid_request_error', 'param': 'max_tokens', 'code': 'unsupported_parameter'}}
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\<user>\AppData\Local\Programs\Python\Python314\Lib\site-packages\codewiki\cli\commands\generate.py", line 368, in generate_command
job = generator.generate()
File "C:\Users\<user>\AppData\Local\Programs\Python\Python314\Lib\site-packages\codewiki\cli\adapters\doc_generator.py", line 148, in generate
asyncio.run(self._run_backend_generation(backend_config))
~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\<user>\AppData\Local\Programs\Python\Python314\Lib\asyncio\runners.py", line 204, in run
return runner.run(main)
~~~~~~~~~~^^^^^^
File "C:\Users\<user>\AppData\Local\Programs\Python\Python314\Lib\asyncio\runners.py", line 127, in run
return self._loop.run_until_complete(task)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^
File "C:\Users\<user>\AppData\Local\Programs\Python\Python314\Lib\asyncio\base_events.py", line 719, in run_until_complete
return future.result()
~~~~~~~~~~~~~^^
File "C:\Users\<user>\AppData\Local\Programs\Python\Python314\Lib\site-packages\codewiki\cli\adapters\doc_generator.py", line 225, in _run_backend_generation
raise APIError(f"Module clustering failed: {e}")
codewiki.cli.utils.errors.APIError: Module clustering failed: Error code: 400 - {'error': {'message': "Unsupported parameter: 'max_tokens' is not supported with this model. Use 'max_completion_tokens' instead.", 'type': 'invalid_request_error', 'param': 'max_tokens', 'code': 'unsupported_parameter'}}
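The traceback points at llm_services.py line 80, where the request is built with max_tokens=config.max_tokens regardless of what the config file contains. As a temporary workaround I patched my local install to choose the parameter based on the model. A rough sketch of the change (the helper, the model-prefix check, and the messages= line are my own guesses, not the actual CodeWiki code; client, config, model, and prompt already exist inside call_llm):

```python
# Sketch of a local patch inside codewiki/src/be/llm_services.py, call_llm().
def _token_limit_kwargs(model: str, limit: int) -> dict:
    # gpt-5* models reject max_tokens on chat.completions and expect
    # max_completion_tokens; older models still accept max_tokens.
    if model.startswith("gpt-5"):
        return {"max_completion_tokens": limit}
    return {"max_tokens": limit}

response = client.chat.completions.create(
    model=model,
    messages=[{"role": "user", "content": prompt}],
    **_token_limit_kwargs(model, config.max_tokens),
)
```

With that change the clustering phase proceeds for me, but a proper fix upstream (or a config option that actually maps to max_completion_tokens) would be better.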