
Conversation

Contributor

@ZIC143 ZIC143 commented Feb 7, 2026

Written for my own needs, pure vibe coding; be warned, it is spaghetti code.

Match the usage record's auth_index against the auth_index in auth_files.
If there is no match, fall back to matching the usage record's source against the ai-provider API key.
If neither matches, the channel is shown as unknown; legacy data also shows as an unknown channel.
Auth-based channels are displayed under the channel name (e.g. antigravity); channels with multiple accounts can be expanded to view per-account usage.
ai-provider channels display the channel name, or the Base_url if no channel name is set.

Database: added a channel column.

Tracked statistics:
request count, tokens, cost, success rate
input / cached / output / reasoning token counts
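The matching rules above can be sketched as a small lookup function. This is an illustrative sketch only; the map names, the function name, and the "unknown" label are assumptions, not the PR's actual identifiers:

```typescript
// Hypothetical sketch of the channel matching rules described in this PR.
type Mappings = {
  authFiles: Map<string, string>; // auth_index -> channel name (from auth_files)
  apiKeys: Map<string, string>;   // API key    -> ai-provider channel name
};

function resolveChannel(
  authIndex: string | null,
  source: string | null,
  maps: Mappings,
): string {
  // 1. Try to match the usage record's auth_index against auth_files.
  if (authIndex !== null && maps.authFiles.has(authIndex)) {
    return maps.authFiles.get(authIndex)!;
  }
  // 2. Fall back to matching the usage source against ai-provider API keys.
  if (source !== null && maps.apiKeys.has(source)) {
    return maps.apiKeys.get(source)!;
  }
  // 3. No match (including legacy rows): report an unknown channel.
  return "unknown";
}
```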


Summary by Sourcery

Add per-channel usage tracking and expose it via a new channel statistics API and UI view.

New Features:

  • Introduce a channel analytics page that groups and displays usage, token, cost, and success-rate statistics per channel over a selectable time range.
  • Expose a new /api/channels endpoint that aggregates usage records by channel and model, computing token counts and estimated costs for each channel.
  • Persist the auth index and derived channel identifier on usage records to enable channel-based reporting.

Enhancements:

  • Resolve auth_index and API key sources into human-readable channel names by querying CLI proxy management endpoints during sync.
  • Improve migration tooling to load environment variables from local .env files and auto-mark the already-applied initial migrations.

Build:

  • Add a new database migration to store auth_index and channel columns on usage_records.

vercel bot commented Feb 7, 2026

@Zic-Wang is attempting to deploy a commit to the sxjeru's projects Team on Vercel.

A member of the Team first needs to authorize it.


sourcery-ai bot commented Feb 7, 2026

Reviewer's Guide

Implements per-channel usage tracking and visualization by enriching usage records with auth/API-key based channel names, adding database columns and migrations, exposing a new /api/channels aggregation endpoint, and adding a new Channels UI page plus sidebar entry for inspecting channel statistics and costs.

Sequence diagram for syncing usage records with channel resolution

sequenceDiagram
  actor Admin
  participant Browser
  participant NextServer
  participant SyncRoute as performSync
  participant CLIProxy as CLIProxyAPI
  participant DB

  Admin->>Browser: Trigger sync request
  Browser->>NextServer: POST /api/sync
  NextServer->>SyncRoute: performSync(request)
  SyncRoute->>SyncRoute: isAuthorized
  SyncRoute-->>NextServer: unauthorized (if invalid)
  NextServer-->>Browser: 401 (if invalid)

  rect rgb(40,40,60)
    SyncRoute->>CLIProxy: fetchAuthIndexMapping
    CLIProxy-->>SyncRoute: auth_index to channel_name mapping
    SyncRoute->>CLIProxy: fetchApiKeyChannelMapping
    CLIProxy-->>SyncRoute: api_key to channel_name mapping
  end

  SyncRoute->>SyncRoute: toUsageRecords(payload, pulledAt, authMap, apiKeyMap)
  SyncRoute->>DB: INSERT usage_records (including authIndex, channel)
  DB-->>SyncRoute: insert result
  SyncRoute-->>NextServer: JSON { status: ok, inserted: n }
  NextServer-->>Browser: 200 OK

Sequence diagram for loading channel statistics on the Channels page

sequenceDiagram
  actor User
  participant Browser as ChannelsPage
  participant NextServer
  participant ChannelsAPI as GET_api_channels
  participant DB

  User->>Browser: Navigate to /channels
  Browser->>ChannelsAPI: GET /api/channels?days=n
  ChannelsAPI->>DB: SELECT aggregated usage_records by channel
  DB-->>ChannelsAPI: ChannelAggRow list
  ChannelsAPI->>DB: SELECT aggregated usage_records by channel, model
  DB-->>ChannelsAPI: ChannelModelAggRow list
  ChannelsAPI->>DB: SELECT * FROM model_prices
  DB-->>ChannelsAPI: PriceRow list
  ChannelsAPI->>ChannelsAPI: estimateCost per channel
  ChannelsAPI-->>Browser: JSON { channels, days }
  Browser->>Browser: groupChannels, aggregateStats
  Browser-->>User: Render grouped channel statistics and costs

Entity relationship diagram for usageRecords and channel tracking

erDiagram
  usageRecords {
    uuid id PK
    timestamptz occurredAt
    timestamptz syncedAt
    text route
    text model
    text authIndex
    text channel
    integer totalTokens
    integer inputTokens
    integer outputTokens
    integer reasoningTokens
    integer cachedTokens
    boolean isError
  }

  modelPrices {
    serial id PK
    text model
    numeric inputPricePer1M
    numeric cachedInputPricePer1M
    numeric outputPricePer1M
  }

  usageRecords }o--o{ modelPrices : model_price_for_model

Class diagram for usage parsing and channel aggregation

classDiagram
  class UsageTokens {
    +number inputTokens
    +number cachedTokens
    +number outputTokens
    +number reasoningTokens
    +number totalTokens
  }

  class UsageDetail {
    +string timestamp
    +string source
    +string auth_index
    +UsageTokens tokens
    +boolean failed
    +number cached
  }

  class ApiParsed {
    +Record~string,UsageDetail[]~ details
    +string api
  }

  class UsageResponse {
    +Record~string,ApiParsed~ apis
  }

  class UsageRecordInsert {
    +Date occurredAt
    +Date syncedAt
    +string route
    +string model
    +string authIndex
    +string channel
    +number totalTokens
    +number inputTokens
    +number outputTokens
    +number reasoningTokens
    +number cachedTokens
    +boolean isError
  }

  class toUsageRecords {
    +UsageRecordInsert[] toUsageRecords(UsageResponse payload, Date pulledAt, Map~string,string~ authMap, Map~string,string~ apiKeyMap)
  }

  class ChannelStat {
    +string channel
    +number requests
    +number totalTokens
    +number inputTokens
    +number outputTokens
    +number reasoningTokens
    +number cachedTokens
    +number errorCount
    +number cost
  }

  class ChannelGroup {
    +string name
    +string type
    +ChannelStat[] channels
    +number requests
    +number totalTokens
    +number inputTokens
    +number outputTokens
    +number reasoningTokens
    +number cachedTokens
    +number errorCount
    +number cost
  }

  class ChannelsPage {
    +ChannelsPage()
  }

  class groupChannelsFn {
    +ChannelGroup[] groupChannels(ChannelStat[] channels)
  }

  class aggregateStatsFn {
    +ChannelStat aggregateStats(ChannelStat[] channels)
  }

  class fmtRateFn {
    +string fmtRate(number requests, number errorCount)
  }

  class rateColorFn {
    +string rateColor(number requests, number errorCount)
  }

  UsageResponse --> ApiParsed : contains
  ApiParsed --> UsageDetail : contains
  UsageDetail --> UsageTokens : contains
  toUsageRecords --> UsageResponse : parses
  toUsageRecords --> UsageRecordInsert : produces
  ChannelGroup --> ChannelStat : aggregates
  ChannelsPage --> ChannelStat : displays
  ChannelsPage --> ChannelGroup : displays
  groupChannelsFn --> ChannelStat : input
  groupChannelsFn --> ChannelGroup : output
  aggregateStatsFn --> ChannelStat : input_output
  fmtRateFn --> ChannelStat : uses
  rateColorFn --> ChannelStat : uses

File-Level Changes

Enrich the usage sync with channel resolution based on auth_index and API-key metadata, and persist it on usage records.
  • Add a helper that fetches the auth_index-to-channel-name mapping from the CLI proxy /auth-files endpoint, with robust error handling.
  • Add a helper that fetches the API-key-to-channel-name mapping from multiple CLI proxy management endpoints and normalizes their heterogeneous JSON shapes.
  • Update the sync flow to fetch both mappings in parallel and pass them into the usage record transformation.
  • Change auth_index parsing to treat it as a hex string rather than a number, compute a display channel name from auth_index and source with fallbacks, and store both auth_index and channel on usage records.
app/api/sync/route.ts
lib/usage.ts
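The note about treating auth_index as a hex string rather than a number can be illustrated with a small normalizer. This is a hypothetical helper, not the PR's code; numeric parsing would turn a value like "0a1b" into NaN and "007" into 7, corrupting the lookup key:

```typescript
// Keep auth_index as a lowercase hex string instead of parsing it numerically,
// so values such as "0a1b" survive as usable lookup keys. (Illustrative sketch.)
function normalizeAuthIndex(raw: unknown): string | null {
  if (raw === null || raw === undefined) return null;
  const s = String(raw).trim().toLowerCase();
  // Accept only hex digits; anything else is treated as missing.
  return /^[0-9a-f]+$/.test(s) ? s : null;
}
```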
Extend the database schema and migration flow to support the new channel fields and handle partially applied earlier migrations.
  • Add nullable text columns auth_index and channel to the usage_records schema and create the corresponding SQL migration and snapshot.
  • Enhance the migration script to load environment variables from .env.local/.env when run as a standalone Node script.
  • Teach the migration script to auto-mark the initial 0000 and 0001 migrations as applied when the database structure already reflects them, preventing duplicate schema operations.
lib/db/schema.ts
drizzle/0002_add_channel_tracking.sql
drizzle/meta/0002_snapshot.json
drizzle/meta/_journal.json
scripts/migrate.mjs
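For illustration, a .env parser that avoids some common pitfalls (surrounding quotes, full-line comments, values containing "=") might look like the sketch below. This is not the script's actual loader, and it still ignores inline comments and multiline values, which is one reason a library like dotenv is usually preferable:

```typescript
// Minimal .env parsing sketch: keeps values containing "=", strips matching
// surrounding quotes, and skips blank and full-line comment lines.
function parseEnv(text: string): Record<string, string> {
  const out: Record<string, string> = {};
  for (const line of text.split(/\r?\n/)) {
    const trimmed = line.trim();
    if (trimmed === "" || trimmed.startsWith("#")) continue; // blank or comment
    const eq = trimmed.indexOf("=");
    if (eq <= 0) continue; // no key before "="
    const key = trimmed.slice(0, eq).trim();
    let value = trimmed.slice(eq + 1).trim();
    // Strip one pair of matching single or double quotes.
    if (
      value.length >= 2 &&
      (value[0] === '"' || value[0] === "'") &&
      value[value.length - 1] === value[0]
    ) {
      value = value.slice(1, -1);
    }
    out[key] = value;
  }
  return out;
}
```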
Add a backend aggregation API that summarizes usage and estimated cost per channel over a configurable date range.
  • Implement a GET /api/channels endpoint that aggregates usage_records by channel (and by channel+model for pricing), with optional start/end or days query parameters.
  • Normalize and validate the date-range input, build a drizzle where clause on occurredAt, and compute the effective day span returned in the response.
  • Use modelPrices and the existing pricing helpers to compute per-channel cost, including cached and reasoning token components, and return a structured JSON payload to the frontend.
app/api/channels/route.ts
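The per-channel cost computation could be sketched roughly as follows. The row and price field names are assumptions based on the guide, and subtracting cached tokens from input before pricing is a guess at the pricing model, not a statement about the PR's code:

```typescript
// Sketch: sum per-(channel, model) token counts against per-1M-token prices.
type ChannelModelRow = {
  channel: string;
  model: string;
  inputTokens: number;
  cachedTokens: number;
  outputTokens: number;
};
type Price = {
  inputPricePer1M: number;
  cachedInputPricePer1M: number;
  outputPricePer1M: number;
};

function estimateCosts(
  rows: ChannelModelRow[],
  prices: Map<string, Price>, // keyed by model name
): Map<string, number> {
  const cost = new Map<string, number>();
  for (const r of rows) {
    const p = prices.get(r.model);
    if (!p) continue; // unpriced models contribute no cost
    const rowCost =
      ((r.inputTokens - r.cachedTokens) * p.inputPricePer1M + // uncached input
        r.cachedTokens * p.cachedInputPricePer1M +
        r.outputTokens * p.outputPricePer1M) /
      1_000_000;
    cost.set(r.channel, (cost.get(r.channel) ?? 0) + rowCost);
  }
  return cost;
}
```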
Introduce a Channels analytics UI that surfaces per-channel usage, success rate, token breakdown, and costs.
  • Create a Channels page that fetches /api/channels, supports time-range selection, sorting, and error/loading states, and aggregates global stats cards.
  • Group channels into auth-based providers versus API-key channels, with expandable groups showing per-account breakdown rows.
  • Render a token composition bar and legend, plus an aligned table-like layout across desktop and mobile, reusing the existing number/currency formatters.
  • Wire the Channels page into the sidebar navigation with a new icon and label for channel statistics.
app/channels/page.tsx
app/components/Sidebar.tsx

Tips and commands

Interacting with Sourcery

  • Trigger a new review: Comment @sourcery-ai review on the pull request.
  • Continue discussions: Reply directly to Sourcery's review comments.
  • Generate a GitHub issue from a review comment: Reply to a review comment asking Sourcery to create an issue from it. You can also reply with @sourcery-ai issue to create an issue from that comment.
  • Generate a pull request title: Write @sourcery-ai anywhere in the pull request title to generate a title at any time. You can also comment @sourcery-ai title on the pull request to (re-)generate the title.
  • Generate a pull request summary: Write @sourcery-ai summary anywhere in the pull request body to generate a PR summary exactly where you want it. You can also comment @sourcery-ai summary on the pull request to (re-)generate the summary.
  • Generate the reviewer's guide: Comment @sourcery-ai guide on the pull request to (re-)generate the reviewer's guide at any time.
  • Resolve all Sourcery comments: Comment @sourcery-ai resolve on the pull request to mark all Sourcery comments as resolved. Useful if you have already addressed them all and don't want to see them anymore.
  • Dismiss all Sourcery reviews: Comment @sourcery-ai dismiss on the pull request to dismiss all existing Sourcery reviews. Especially useful when you want to start fresh with a new review; don't forget to comment @sourcery-ai review afterwards to trigger one!

Customizing Your Experience

Access your dashboard to:

  • Enable or disable review features such as the Sourcery-generated pull request summary, the reviewer's guide, and others.
  • Change the review language.
  • Add, remove, or edit custom review instructions.
  • Adjust other review settings.

Getting Help


@sourcery-ai sourcery-ai bot left a comment

Hey - I've left some high-level feedback:

  • The custom .env loader in scripts/migrate.mjs is very naive (no quote handling, inline comments, or values containing =) and may mis-parse valid environment files; consider using dotenv or reusing Next's env loading logic instead of hand-rolling a parser.
  • The channels API runs two separate aggregation queries over usage_records for roughly the same time range; you could reduce load by doing a single aggregation grouped by (channel, model) and deriving per-channel totals from that result rather than scanning the table twice.
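The suggested single-pass approach (one (channel, model) aggregation, per-channel totals derived in memory) could be folded roughly like this; the row shape is an assumption for illustration:

```typescript
// Sketch: derive per-channel totals from rows already grouped by (channel, model),
// avoiding a second GROUP BY channel query over usage_records.
type AggRow = { channel: string; model: string; requests: number; totalTokens: number };

function perChannelTotals(
  rows: AggRow[],
): Map<string, { requests: number; totalTokens: number }> {
  const totals = new Map<string, { requests: number; totalTokens: number }>();
  for (const row of rows) {
    const t = totals.get(row.channel) ?? { requests: 0, totalTokens: 0 };
    t.requests += row.requests;
    t.totalTokens += row.totalTokens;
    totals.set(row.channel, t);
  }
  return totals;
}
```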
Prompt for AI Agents
Please address the comments from this code review:

## Overall Comments
- The custom .env loader in `scripts/migrate.mjs` is very naive (no quote handling, inline comments, or values containing `=`) and may mis-parse valid environment files; consider using `dotenv` or reusing Next’s env loading logic instead of hand-rolling a parser.
- The channels API runs two separate aggregation queries over `usage_records` for roughly the same time range; you could reduce load by doing a single aggregation grouped by `(channel, model)` and deriving per-channel totals from that result rather than scanning the table twice.

Sourcery is free for open source - if you like our reviews please consider sharing them ✨
Help me be more useful! Please click 👍 or 👎 on each comment and I'll use the feedback to improve your reviews.
