
fix(pipecat-moss): fix compatibility with pipecat-ai 1.1.0 #191

Open

rohanshrma222 wants to merge 7 commits into usemoss:main from rohanshrma222:updatepipecat-moss

Conversation

rohanshrma222 (Contributor) commented Apr 30, 2026

Pull Request Checklist

Please ensure that your PR meets the following requirements:

  • I have read the CONTRIBUTING guide.
  • I have updated the documentation (if applicable).
  • My code follows the style guidelines of this project.
  • I have performed a self-review of my own code.
  • I have added tests that prove my fix is effective or that my feature works.
  • New and existing unit tests pass locally with my changes.

Description

This PR fixes a complete import failure in pipecat-moss when used with pipecat-ai 1.1.0 (the current latest release).

Pipecat 1.1.0 introduced two breaking API removals that moss_index_processor.py depended on:

  1. LLMMessagesFrame was removed, split into LLMMessagesAppendFrame, LLMMessagesUpdateFrame, and LLMMessagesTransformFrame. It is replaced here with LLMMessagesUpdateFrame, which matches the original behaviour of carrying a full messages list that replaces the current context.

  2. OpenAILLMContextFrame and its entire module were deleted: Pipecat unified all provider-specific context frame subclasses into the base LLMContextFrame, with provider formatting handled internally by the LLM service adapters. The import is removed and all isinstance checks are simplified to use LLMContextFrame directly (see the sketch below).
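
A minimal sketch of the migrated frame handling, assuming the pipecat-ai 1.1.0 symbols named above; the import paths and the bare LLMContext constructor follow the diff hunks quoted later in this thread and are not verified beyond that:

```python
from pipecat.frames.frames import (  # import path assumed from pipecat's layout
    Frame,
    LLMContextFrame,          # unified base frame, replaces OpenAILLMContextFrame
    LLMMessagesUpdateFrame,   # full-replace semantics, replaces LLMMessagesFrame
)
from pipecat.processors.aggregators.llm_context import LLMContext  # path assumed
from pipecat.processors.frame_processor import FrameDirection, FrameProcessor


class IndexProcessorSketch(FrameProcessor):  # stand-in for MossIndexProcessor
    async def process_frame(self, frame: Frame, direction: FrameDirection):
        await super().process_frame(frame, direction)
        if isinstance(frame, LLMContextFrame):
            # Provider formatting now lives in the LLM service adapters, so a
            # single isinstance check on the base class is enough.
            context = frame.context
        elif isinstance(frame, LLMMessagesUpdateFrame):
            # Same semantics as the removed LLMMessagesFrame: a full messages
            # list that replaces the current context.
            context = LLMContext(frame.messages)
        else:
            await self.push_frame(frame, direction)
            return
        # ...retrieval augmentation over `context` happens here, then the
        # augmented frame is pushed downstream.
```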

The pyproject.toml version constraint is also tightened from >=0.0.99 to >=1.1.0 to make the supported Pipecat version explicit and prevent silent breakage for users.
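
For reference, the tightened constraint would look roughly like this in packages/pipecat-moss/pyproject.toml (a hypothetical excerpt; the surrounding fields are illustrative, not copied from the file):

```toml
[project]
name = "pipecat-moss"
dependencies = [
    # Floor pinned to the first pipecat release with the new frame API
    "pipecat-ai>=1.1.0",
]
```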

Fixes #190

Type of Change

  • Bug fix (non-breaking change which fixes an issue)
  • New feature
  • Breaking change
  • Documentation update


devin-ai-integration (bot) left a comment


Devin Review found 3 potential issues.

View 1 additional finding in Devin Review.


Comment thread: packages/pipecat-moss/pyproject.toml
Comment on lines +90 to 94:

        if isinstance(frame, LLMContextFrame):
            context = frame.context
-       elif isinstance(frame, LLMMessagesFrame):
+       elif isinstance(frame, LLMMessagesUpdateFrame):
            messages = frame.messages
            context = LLMContext(messages)

🚩 Example app monkey-patches reference the removed OpenAILLMContextFrame

The example apps at apps/pipecat-moss/ollama-local/ollama_bot.py:102-116 and apps/pipecat-moss/hume-ollama-local/hume_ollama_bot.py:101-115 monkey-patch MossIndexProcessor.process_frame to check for OpenAILLMContextFrame. With pipecat-ai bumped to >=1.1.0 (where OpenAILLMContextFrame may no longer exist), and the library code no longer handling that frame type, these example apps will likely break at runtime. These apps depend on pipecat-moss>=0.0.1 which would resolve to the new 0.1.0, pulling in pipecat-ai>=1.1.0. The monkey-patches should be updated or removed as part of this migration.
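
A hedged sketch of how those monkey-patches could be brought in line; MossIndexProcessor comes from the finding above, while the module path, patch body, and helper names are illustrative and not verified against the example apps:

```python
from pipecat.frames.frames import LLMContextFrame  # import path assumed

from pipecat_moss.moss_index_processor import MossIndexProcessor  # module path assumed

_original_process_frame = MossIndexProcessor.process_frame


async def _patched_process_frame(self, frame, direction):
    # 1.1.0: check the unified LLMContextFrame base class instead of the
    # deleted OpenAILLMContextFrame.
    if isinstance(frame, LLMContextFrame):
        pass  # app-specific handling would go here
    await _original_process_frame(self, frame, direction)


MossIndexProcessor.process_frame = _patched_process_frame
```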


-            await self.push_frame(LLMMessagesFrame(context.get_messages()))
-        elif isinstance(frame, (LLMContextFrame, OpenAILLMContextFrame)):
-            await self.push_frame(type(frame)(context=context))  # type: ignore[arg-type]
+            await self.push_frame(LLMMessagesUpdateFrame(messages=context.get_messages()))
devin-ai-integration (bot) commented Apr 30, 2026


🚩 LLMMessagesUpdateFrame fields beyond 'messages' are not propagated

In moss_index_processor.py:92-94, when an LLMMessagesUpdateFrame is received, only frame.messages is extracted. At line 130, a brand-new LLMMessagesUpdateFrame(messages=context.get_messages()) is pushed, potentially dropping any other fields the original frame carried. The old code did the same with LLMMessagesFrame, so this is pre-existing behavior, not a regression. However, if LLMMessagesUpdateFrame in pipecat 1.1.0 has additional fields (like run_llm seen on LLMMessagesAppendFrame in the example), those would be silently lost. Worth verifying against the pipecat 1.1.0 frame definitions.
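
One way to avoid that, as a sketch; it assumes pipecat frames are dataclasses (true elsewhere in pipecat) so dataclasses.replace() can copy every field the original frame carried:

```python
import dataclasses

from pipecat.frames.frames import LLMMessagesUpdateFrame  # import path assumed


def rebuild_update_frame(
    original: LLMMessagesUpdateFrame, messages: list
) -> LLMMessagesUpdateFrame:
    # Copies all fields of the original frame (e.g. a run_llm flag, if 1.1.0
    # defines one) and swaps in only the augmented messages list.
    return dataclasses.replace(original, messages=messages)
```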


rohanshrma222 (Contributor, Author)

@yatharthk2

Please review this PR so that we can move quickly and merge it.

yatharthk2 (Collaborator) left a comment


Can you please update the changelog? It can be a very generic update message.

Comment thread: packages/pipecat-moss/pyproject.toml (Outdated)

    [project]
    name = "pipecat-moss"
-   version = "0.0.3"
+   version = "0.1.0"
yatharthk2 (Collaborator)

Can you please make it 0.0.4?

yatharthk2 (Collaborator)

Can you please provide a quick Loom video that runs the pipecat-moss examples?

rohanshrma222 (Contributor, Author) commented May 1, 2026

Fix Summary Table

| Area | Original (broken) | Fixed (now) | Why |
| --- | --- | --- | --- |
| Import | LLMRunFrame | LLMMessagesAppendFrame | LLMRunFrame removed in 1.1.0 |
| Import | RTVIConfig present | removed | deleted in 1.1.0 |
| Import | DailyParams present | removed | no Windows wheels |
| Import | SileroVADAnalyzer present | removed | needs PyTorch |
| RTVIProcessor() | config=RTVIConfig(config=[...]) | no args | RTVIConfig gone |
| RTVIObserver | no params arg | params=RTVIObserverParams() | required in 1.1.0 |
| System prompt | no greeting instruction | greeting instruction added | LLM needs to know to greet first |
| on_client_connected | messages.append + LLMRunFrame() | LLMMessagesAppendFrame(run_llm=True) | only user messages satisfy Gemini |
| Transport params | daily + webrtc options | webrtc only | daily-python has no Windows wheels |
Video: 2026-05-01.10-08-38.mp4
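
As a sketch of the on_client_connected row in the table above, assuming a typical pipecat bot setup where a transport and a PipelineTask named task already exist (both names are illustrative, not taken from the example apps):

```python
from pipecat.frames.frames import LLMMessagesAppendFrame  # import path assumed


@transport.event_handler("on_client_connected")
async def on_client_connected(transport, client):
    # 1.1.0 style: append a user-role message and trigger the LLM run with a
    # single frame, replacing the old messages.append(...) followed by a
    # separate LLMRunFrame(). A user-role message is used because, per the
    # table, Gemini only accepts user messages to open a turn.
    await task.queue_frames(
        [
            LLMMessagesAppendFrame(
                messages=[{"role": "user", "content": "Greet the user briefly."}],
                run_llm=True,
            )
        ]
    )
```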

devin-ai-integration[bot]

This comment was marked as resolved.

devin-ai-integration[bot]

This comment was marked as resolved.

rohanshrma222 (Contributor, Author)

I made changes in the example demo as well; it was following the older version.

devin-ai-integration (bot) left a comment


Devin Review found 3 new potential issues.

View 5 additional findings in Devin Review.


Comment thread: packages/pipecat-moss/pyproject.toml

🚩 Error path in process_frame swallows the original frame

In moss_index_processor.py:136-138, if an exception occurs during retrieval processing, the original frame (LLMContextFrame or LLMMessagesUpdateFrame) is consumed but never forwarded — only an ErrorFrame is pushed. This means the LLM will never receive the user's message, and the conversation will appear to hang. A more robust approach would be to fall back to pushing the original unaugmented frame so the LLM can still respond (just without retrieval context). This is a pre-existing issue not introduced by this PR, but worth noting since the surrounding code was touched.

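A sketch of the suggested fallback, meant to slot into process_frame around moss_index_processor.py:136-138; the retrieval call is reduced to a hypothetical helper and the ErrorFrame import path is assumed:

```python
from pipecat.frames.frames import ErrorFrame, LLMMessagesUpdateFrame  # paths assumed

try:
    augmented = await self._augment_with_retrieval(context)  # hypothetical helper
    await self.push_frame(LLMMessagesUpdateFrame(messages=augmented.get_messages()))
except Exception as exc:
    await self.push_frame(ErrorFrame(error=str(exc)))
    # Fall back to forwarding the original, unaugmented frame so the LLM
    # still sees the user's message instead of the turn silently hanging.
    await self.push_frame(frame, direction)
```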


     Frame,
     LLMContextFrame,
-    LLMMessagesFrame,
+    LLMMessagesUpdateFrame,

🚩 Semantic difference between LLMMessagesUpdateFrame and old LLMMessagesFrame is unverified

The PR replaces LLMMessagesFrame with LLMMessagesUpdateFrame (moss_index_processor.py:20,92,130). If LLMMessagesUpdateFrame carries different semantics (e.g., a delta/update vs full message list), the logic of creating a new LLMContext from frame.messages and then pushing a new LLMMessagesUpdateFrame(messages=context.get_messages()) could be incorrect. The CHANGELOG describes this as a direct replacement of a removed class, and the uv.lock confirms pipecat-ai 1.1.0 is resolved, so this is likely safe — but it would be worth verifying against the pipecat 1.1.0 release notes or source.
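
A quick check that could be run against the resolved pipecat-ai 1.1.0 environment to confirm those semantics (assumes the frame is a dataclass, as pipecat frames are elsewhere):

```python
import dataclasses

from pipecat.frames.frames import LLMMessagesUpdateFrame  # import path assumed

# List the frame's fields and print its docstring; the docstring should say
# whether `messages` is a full replacement for the context or a delta.
print([f.name for f in dataclasses.fields(LLMMessagesUpdateFrame)])
print(LLMMessagesUpdateFrame.__doc__)
```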


rohanshrma222 (Contributor, Author)

Yes, I have verified it by running an internal test.

yatharthk2 (Collaborator)

Can you please address the Devin comments?

CLAassistant

CLA assistant check
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you all sign our Contributor License Agreement before we can accept your contribution.
1 out of 2 committers have signed the CLA.

✅ rohanshrma222
❌ Your Name


Your Name seems not to be a GitHub user. You need a GitHub account to be able to sign the CLA. If you already have a GitHub account, please add the email address used for this commit to your account.
You have signed the CLA already but the status is still pending? Let us recheck it.



Development

Successfully merging this pull request may close these issues.

[Bug]: pipecat-moss breaks with Pipecat 1.1.0 due to removed APIs
