Fix/code review bugs #53

Open
cosminacho wants to merge 10 commits into main from fix/code-review-bugs
Conversation

@cosminacho
Collaborator

No description provided.

Bug fixes (critical):
- callbacks.py: add ABC to UiPathDynamicHeadersCallback so @abstractmethod
  is enforced — previously the class could be instantiated without implementing
  get_headers(), silently deferring the error to runtime
- clients/anthropic/chat_models.py: make vendor_type a required field (no default)
  and seed api_config with VendorType.VERTEXAI as placeholder; the old default of
  VendorType.ANTHROPIC caused the model validator to immediately raise ValueError,
  making the class impossible to construct directly; also removed dead
  VendorType.ANTHROPIC / VendorType.AZURE branches from _anthropic_client and
  _async_anthropic_client that could never be reached, and unused imports
- clients/bedrock/utils.py: fix json.loads(kwargs.get('body', {})) — passing a
  dict as default caused TypeError since json.loads requires str/bytes; changed
  default to '{}' string; also corrected converse() and converse_stream()
  system parameter type annotation from str|None to list[dict[str,Any]]|None to
  match the list[dict] value langchain-aws actually passes
- clients/normalized/chat_models.py: add 'stream': True to request body in
  _uipath_stream and _uipath_astream — without it the server returns a regular
  non-streaming JSON response instead of SSE; fix fragile SSE prefix stripping
  from chunk.split('data:')[1] (splits on all occurrences) to
  chunk[len('data:'):].strip(); fix _generate_chunk to set text=content or ''
  instead of text=original_message (raw JSON wire data)
- factory.py: replace bare model_info['vendor'] key access in get_embedding_model
  with model_info.get('vendor') + explicit ValueError, matching the safe pattern
  already used in get_chat_model
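
Two of the string-handling fixes above can be sketched in isolation (a minimal
illustration with made-up data, not the actual module code):

```python
import json

# Fix 1 (bedrock/utils.py): json.loads requires str/bytes, so a dict default
# raises TypeError whenever 'body' is absent. The fixed default is the
# empty-JSON string '{}'.
kwargs: dict = {}  # request kwargs with no 'body' supplied
try:
    json.loads(kwargs.get("body", {}))  # old code path: TypeError
except TypeError as exc:
    print("buggy default:", exc)
body = json.loads(kwargs.get("body", "{}"))  # fixed: parses to {}

# Fix 2 (normalized/chat_models.py): split('data:')[1] splits on EVERY
# occurrence, truncating any payload that itself contains 'data:'.
# Stripping only the leading prefix is robust.
def strip_sse_prefix(chunk: str) -> str:
    return chunk[len("data:"):].strip() if chunk.startswith("data:") else chunk.strip()

chunk = 'data: {"text": "metadata: see data: field"}'
print(chunk.split("data:")[1])   # truncated mid-payload
print(strip_sse_prefix(chunk))   # full JSON payload preserved
```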

Bug fixes (minor):
- base_client.py: correct return type annotations on _astream, _uipath_astream,
  and uipath_astream from AsyncIterator to AsyncGenerator (these are async
  generator functions that contain yield, not plain iterables)
- demo.py: replace unsafe eval(expression) with an AST-based arithmetic-only
  evaluator to prevent arbitrary code execution via LLM-generated input
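
The AST-based evaluator mentioned above can be sketched roughly like this
(an illustrative version, not necessarily the exact code in demo.py):

```python
import ast
import operator

# Whitelisted arithmetic operators; anything outside this set is rejected.
_BIN_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.Pow: operator.pow,
    ast.Mod: operator.mod,
}
_UNARY_OPS = {ast.UAdd: operator.pos, ast.USub: operator.neg}

def safe_eval(expression: str) -> float:
    """Evaluate arithmetic-only expressions; any other node raises ValueError."""
    def _eval(node: ast.AST) -> float:
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _BIN_OPS:
            return _BIN_OPS[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _UNARY_OPS:
            return _UNARY_OPS[type(node.op)](_eval(node.operand))
        raise ValueError(f"unsupported expression node: {type(node).__name__}")
    return _eval(ast.parse(expression, mode="eval").body)

print(safe_eval("2 + 3 * (4 - 1)"))  # 11
try:
    safe_eval("__import__('os').system('ls')")  # Call node -> rejected
except ValueError as exc:
    print("blocked:", exc)
```

Unlike eval(), the whitelist makes attribute access, function calls, and name
lookups unrepresentable, so LLM-generated input cannot escape into arbitrary code.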

Version bumps:
- packages/uipath_langchain_client: 1.5.10 -> 1.6.0
- src/uipath/llm_client:             1.5.10 -> 1.6.0
@cosminacho force-pushed the fix/code-review-bugs branch from 5841d3f to dc78050 on April 1, 2026 at 22:18
Bug #5 — UiPathFireworksEmbeddings: self.model returned wrong model name
  Without an explicit field override, Pydantic kept two separate fields:
  FireworksEmbeddings.model (default 'nomic-ai/nomic-embed-text-v1.5') and
  UiPathBaseLLMClient.model_name (populated by the caller via alias 'model').
  self.model therefore always resolved to the stale FireworksEmbeddings default,
  silently ignoring whatever model name the caller passed.
  Fix: add model: str = Field(default='', alias='model_name') — the same override
  pattern used by every other embedding class — to collapse the two fields into
  one. Replace self.model with self.model_name in all four embed methods.
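
The two-fields-from-two-bases hazard can be reproduced with a small Pydantic v2
model (class names here are stand-ins for FireworksEmbeddings and
UiPathBaseLLMClient, and the alias layout is a simplification of the real one):

```python
from pydantic import BaseModel, ConfigDict, Field

class VendorEmbeddings(BaseModel):
    # upstream default, analogous to FireworksEmbeddings.model
    model: str = "nomic-ai/nomic-embed-text-v1.5"

class ClientBase(BaseModel):
    model_config = ConfigDict(protected_namespaces=(), populate_by_name=True)
    # caller-facing field, analogous to UiPathBaseLLMClient.model_name
    model_name: str = ""

class Buggy(ClientBase, VendorEmbeddings):
    """No override: Pydantic keeps both fields, so self.model stays stale."""

b = Buggy(model_name="custom-embed-v2")
print(b.model)       # still the upstream default, not what the caller asked for
print(b.model_name)  # the requested model

class Fixed(ClientBase, VendorEmbeddings):
    # Override collapses the two fields: `model` now mirrors the
    # caller-facing name via the alias.
    model: str = Field(default="", alias="model_name")

f = Fixed(model_name="custom-embed-v2")
print(f.model)       # tracks the requested model
```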

Bug #8 — UiPathEmbeddings: silent HTTP error swallowing + missing model field
  Two issues in embed_documents / aembed_documents:
  1. uipath_request / uipath_arequest were called without raise_status_error=True,
     so HTTP 4xx/5xx responses were silently accepted. The subsequent
     response.json()['data'] access would then raise a confusing KeyError or
     AttributeError rather than a clear HTTP error.
  2. The request body only contained {'input': texts}. The normalized API
     requires a 'model' field for routing, matching the pattern already used
     by the normalized chat model (_default_params includes 'model').
  Fix: pass raise_status_error=True and add 'model': self.model_name to the
  request body in both the sync and async paths.
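
The failure mode in item 1 can be sketched with a stand-in response object
(FakeResponse is illustrative only; the real code goes through
uipath_request / uipath_arequest):

```python
class FakeResponse:
    """Minimal stand-in for an HTTP response (not the real client class)."""
    def __init__(self, status_code: int, payload: dict):
        self.status_code = status_code
        self._payload = payload
    def json(self) -> dict:
        return self._payload
    def raise_for_status(self) -> None:
        if self.status_code >= 400:
            raise RuntimeError(f"HTTP {self.status_code}")

resp = FakeResponse(503, {"error": "upstream unavailable"})

# Without raising on status, the failure surfaces later as an opaque KeyError:
try:
    resp.json()["data"]
except KeyError as exc:
    print("opaque failure:", exc)

# With raise-on-status semantics, the HTTP error itself is reported:
try:
    resp.raise_for_status()
except RuntimeError as exc:
    print("clear failure:", exc)
```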
…all generator functions

Every function annotated -> Iterator[T] or -> AsyncIterator[T] that contains
a yield statement is actually a generator function, not a plain function that
returns an iterator. The correct annotations are:

  sync  yield  ->  Generator[YieldType, SendType, ReturnType]
  async yield  ->  AsyncGenerator[YieldType, SendType]

The previous async fix (Iterator -> AsyncGenerator) was applied inconsistently:
the same issue existed on all sync counterparts and was left behind. This commit
fixes all remaining sites for full consistency:

base_client.py:
  uipath_stream       Iterator[str | bytes]                  -> Generator[str | bytes, None, None]
  _stream             Iterator[ChatGenerationChunk]          -> Generator[ChatGenerationChunk, None, None]
  _uipath_stream      Iterator[ChatGenerationChunk]          -> Generator[ChatGenerationChunk, None, None]

clients/normalized/chat_models.py:
  _uipath_stream      Iterator[ChatGenerationChunk]          -> Generator[ChatGenerationChunk, None, None]

clients/bedrock/utils.py:
  _stream_generator   Iterator[dict[str, Any]]               -> Generator[dict[str, Any], None, None]

Added Generator to collections.abc imports in all three files.
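
A minimal illustration of the annotation rule (toy functions, not the actual
client methods):

```python
import asyncio
from collections.abc import AsyncGenerator, Generator

# A function whose body contains `yield` is a generator function; annotate
# what the CALLER receives: Generator[YieldType, SendType, ReturnType].
def count_up(limit: int) -> Generator[int, None, None]:
    for i in range(limit):
        yield i

# Async generator functions take no ReturnType slot:
# AsyncGenerator[YieldType, SendType].
async def acount_up(limit: int) -> AsyncGenerator[int, None]:
    for i in range(limit):
        yield i

async def collect(limit: int) -> list[int]:
    return [i async for i in acount_up(limit)]

print(list(count_up(3)))
print(asyncio.run(collect(3)))
```

Iterator/AsyncIterator are supertypes of Generator/AsyncGenerator, so the old
annotations type-checked but hid the send()/throw()/close() surface that
generators actually expose.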
ruff --fix:
- demo.py: remove extraneous f-prefix from plain string literal (F541)
- base_client.py: remove unused AsyncIterator, Iterator imports (F401)
- bedrock/utils.py: remove unused Iterator import (F401); fix import block
  order (I001)
- normalized/chat_models.py: remove unused Iterator import (F401)

ruff format:
- demo.py: expand tuple literal to one-entry-per-line style
- bedrock/utils.py: wrap long _stream_generator signature to fit line-length

pyright (--pythonpath .venv/bin/python):
- bedrock/chat_models.py: UiPathChatBedrockConverse and UiPathChatBedrock mix
  UiPathBaseLLMClient (max_retries: int) with ChatBedrockConverse/ChatBedrock
  (max_retries: int | None); the type narrowing is intentional — our field
  enforces a non-None default of 0 and the httpx client accepts int | None.
  Suppress with # type: ignore[override] on both class definition lines, which
  is the correct pyright mechanism for intentional incompatible base-class field
  merges in multiple-inheritance mixins.
The normalized routing layer resolves the model from the URL/connection
context — the caller does not need to send it in the JSON body.

- normalized/chat_models.py: remove 'model': self.model_name from _default_params
- normalized/embeddings.py: remove 'model': self.model_name from both
  embed_documents and aembed_documents request bodies
- Restore VendorType.ANTHROPIC as default for both api_config.vendor_type
  and the vendor_type field
- Restore all four vendor branches (ANTHROPIC, AZURE, VERTEXAI, AWSBEDROCK)
  in _anthropic_client and _async_anthropic_client
- Restore original imports (Anthropic, AnthropicFoundry, AsyncAnthropic,
  AsyncAnthropicFoundry) that were removed in the earlier fix
- Remove the long explanatory comment on api_config
- Keep the original error messages verbatim
- clients/vertexai/chat_models.py: pass dict(self.uipath_sync_client.headers)
  and dict(self.uipath_async_client.headers) instead of the raw httpx.Headers
  objects, matching the pattern used in clients/anthropic/chat_models.py
- demo.py: import RoutingMode from the public uipath_langchain_client.settings
  API instead of the internal uipath.llm_client.settings.constants path;
  merge with the existing get_default_client_settings import
…message param

- clients/anthropic/chat_models.py: add VendorType.ANTHROPIC and
  VendorType.AZURE cases to setup_api_flavor_and_version validator so the
  class can be constructed with all supported vendor types without raising
- clients/normalized/chat_models.py: remove the now-unused original_message
  parameter from _generate_chunk and update both call sites in
  _uipath_stream and _uipath_astream
These vendors are not yet supported by UiPath's gateway, so no api_flavor
routing header should be emitted for them.
@cosminacho deployed to LLMGW_SETTINGS on April 1, 2026 at 23:25 with GitHub Actions