wip: Raw structure of improvements to gun that run in production #690

Open

speeddragon wants to merge 49 commits into neo/edge from impr/gun
Conversation

speeddragon (Collaborator) commented Feb 26, 2026

In this PR, there are a few improvements to how gun handles connections.

  • Connections are fetched from ETS instead of going through the gen_server.
  • Support for multiple connections to the same peer.
  • Connections are split into read (GET/HEAD) and write (POST/PUT) pools.
    • This was most useful when handling S3 read/write; I'm not sure how useful it is for this case.
    • The pool sizes can be configured in a config file by setting conn_pool_read_size and conn_pool_write_size.
  • Improve and fix metrics for both the gun and httpc clients.
    • Added a category to separate duration metrics by endpoint (/tx, /chunk, etc.).
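
As a sketch of the configuration mentioned above: only the conn_pool_read_size and conn_pool_write_size keys come from this PR; the surrounding map shape and the example values are illustrative assumptions, not the node's actual config format.

```erlang
%% Illustrative config fragment. Only the two pool-size keys are
%% from this PR; the map shape and values are assumptions.
#{
    conn_pool_read_size => 8,   %% connections per peer for GET/HEAD
    conn_pool_write_size => 4   %% connections per peer for POST/PUT
}.
```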

TODO

  • Improve the code structure.
  • Review the remaining TODOs.

@speeddragon speeddragon force-pushed the impr/gun branch 2 times, most recently from acd6a2f to bed3351 Compare February 27, 2026 00:14
@speeddragon speeddragon changed the base branch from edge to neo/edge February 27, 2026 00:14
Comment on lines +22 to +25
setup_conn(Opts) ->
    ConnPoolReadSize = hb_maps:get(conn_pool_read_size, Opts, ?DEFAULT_CONN_POOL_READ_SIZE),
    ConnPoolWriteSize = hb_maps:get(conn_pool_write_size, Opts, ?DEFAULT_CONN_POOL_WRITE_SIZE),
    persistent_term:put(?CONN_TERM, {ConnPoolReadSize, ConnPoolWriteSize}).
speeddragon (Collaborator, Author) commented:
I was trying to find a better way to define this, but I couldn't.
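
For illustration, the read/write split described in this PR might route requests along these lines. This is a sketch only: the module, function, and atom names (pool_for_method/1, pick_conn/2, read, write) are hypothetical and do not appear in this PR.

```erlang
%% Sketch: choose a pool based on HTTP method, mirroring the
%% read (GET/HEAD) vs write (POST/PUT) split described in this PR.
%% All names here are hypothetical, not taken from the codebase.
-module(conn_pool_sketch).
-export([pool_for_method/1, pick_conn/2]).

%% Read-only methods go to the read pool; everything else writes.
pool_for_method(Method) when Method =:= get; Method =:= head -> read;
pool_for_method(_Method) -> write.

%% Pick a connection slot within the pool by hashing a unique
%% integer, so concurrent callers spread across the pool without
%% a gen_server round-trip.
pick_conn(Pool, PoolSize) when PoolSize > 0 ->
    {Pool, erlang:phash2(erlang:unique_integer(), PoolSize) + 1}.
```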

@samcamwilliams samcamwilliams force-pushed the impr/gun branch 6 times, most recently from c7d1484 to 418d802 Compare March 3, 2026 21:59
samcamwilliams and others added 20 commits March 6, 2026 15:53
Now the dev_bundler server manages the dispatch workers directly
Continues the refactoring of the dev_bundler modules and implements functionality that allows unbundled items and unfinished bundles to be recovered incrementally, rather than requiring that all data be loaded before any of it is processed.
Also force an LMDB flush to disk to ensure recovered items are correctly persisted.
…ed regression within this PR)

Now we do two converts before posting a TX:
- one to build a header-only TX that we use for posting. This makes the posting process much quicker.
- one to build a full-data TX that we use for caching and recovery.

We shouldn't need to cache the full-data TX, so a future optimization can address that. We'll just have to be careful about rebuilding the data payload from the cached data items to ensure the result is the same as when the original TX was posted.
This breaks the HB loose coupling, but for large bundles the TX posts in seconds instead of minutes.
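
A minimal sketch of the header-only conversion, assuming the TX is represented as a map with a binary data field (an assumption; the actual representation in this repo is likely a record with more fields, and the module/function names here are hypothetical):

```erlang
%% Sketch only: build a header-only copy of a TX for posting while
%% keeping the full-data version for caching and recovery.
%% Assumes the TX is a map with a `data` key; the real codebase may
%% use a record instead.
-module(tx_split_sketch).
-export([header_only/1]).

header_only(TX) ->
    %% Strip the (potentially huge) payload but keep all other
    %% fields, so posting does not serialize the full bundle data.
    TX#{data => <<>>}.
```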
impr: manifest resolution and compatibility fixes
impr: both from and to can be negative offsets from tip (copycat)