Refactor liquidity source to support multiple LSP nodes #792

Open
Camillarhi wants to merge 9 commits into lightningdevkit:main from Camillarhi:multi-lsp-support

Conversation

Contributor

@Camillarhi Camillarhi commented Feb 12, 2026

The current setup ties you to a single LSP per protocol via set_liquidity_source_lsps1 / set_liquidity_source_lsps2. This refactor replaces that with a unified Vec<LspNode> model where LSP nodes are added via add_lsp() and protocol support is discovered at runtime through LSPS0 list_protocols. Multi-LSP support has been requested previously in #529.

  • Deprecated set_liquidity_source_lsps1 / set_liquidity_source_lsps2 in favor of add_lsp()
  • Replaced the per-protocol LSPS1Client / LSPS2Client with global pending request maps keyed by LSPSRequestId
  • Added LSPS0 protocol discovery with event handling for ListProtocolsResponse
  • Background discovery task spawns on Node::start()
  • LSPS2 JIT channels now query all LSPS2-capable LSPs and automatically select the cheapest fee offer
  • Added request_channel_from_lsp() for explicit LSPS1 LSP selection
  • Updated event handling to use is_lsps_node() for multi-LSP counterparty checks

This sets the foundation for the LSPS5 support currently being worked on in #729.
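The unified model and the cheapest-offer selection described above can be sketched as follows. This is a minimal, std-only illustration with made-up names (`LspNode`, `FeeOffer`, `cheapest_lsps2_offer`, string node ids), not the PR's actual types; it assumes LSPS0 `list_protocols` reports LSPS2 as protocol number 2 and that protocol support is unknown (`None`) until discovery completes.

```rust
/// Illustrative stand-in for an LSP's node id (the real code uses a PublicKey).
type NodeId = String;

struct LspNode {
    node_id: NodeId,
    /// None until LSPS0 `list_protocols` discovery completes for this node.
    supported_protocols: Option<Vec<u16>>,
}

struct FeeOffer {
    node_id: NodeId,
    opening_fee_msat: u64,
}

/// Pick the cheapest offer among all LSPS2-capable nodes (protocol number 2).
fn cheapest_lsps2_offer(nodes: &[LspNode], offers: &[FeeOffer]) -> Option<NodeId> {
    offers
        .iter()
        .filter(|o| {
            nodes.iter().any(|n| {
                n.node_id == o.node_id
                    && n.supported_protocols.as_deref().map_or(false, |p| p.contains(&2))
            })
        })
        .min_by_key(|o| o.opening_fee_msat)
        .map(|o| o.node_id.clone())
}

fn main() {
    let nodes = vec![
        LspNode { node_id: "lsp_a".into(), supported_protocols: Some(vec![1, 2]) },
        LspNode { node_id: "lsp_b".into(), supported_protocols: Some(vec![2]) },
        LspNode { node_id: "lsp_c".into(), supported_protocols: None }, // still discovering
    ];
    let offers = vec![
        FeeOffer { node_id: "lsp_a".into(), opening_fee_msat: 5_000 },
        FeeOffer { node_id: "lsp_b".into(), opening_fee_msat: 3_000 },
    ];
    // lsp_b is LSPS2-capable and cheaper, so it is selected.
    assert_eq!(cheapest_lsps2_offer(&nodes, &offers), Some("lsp_b".to_string()));
    println!("ok");
}
```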


ldk-reviews-bot commented Feb 12, 2026

👋 Thanks for assigning @tnull as a reviewer!
I'll wait for their review and will help manage the review process.
Once they submit their review, I'll check if a second reviewer would be helpful.

@Camillarhi
Contributor Author

@tnull Early draft up for review, would appreciate feedback on the general API direction

@tnull tnull self-requested a review February 24, 2026 16:28
Collaborator

@tnull tnull left a comment


Thanks! Some initial questions...

Comment thread src/builder.rs Outdated
Comment thread src/liquidity.rs Outdated
supported_protocols: Mutex::new(None),
})
.collect(),
pending_lsps1_opening_params_requests: Mutex::new(HashMap::new()),
Collaborator


Uh, no, I don't think we should just merge all into one. Especially given that we intend to add more logic on a per-spec basis, this will become even more confusing going forward. If anything, we should maybe start the refactoring by first moving the LSPS1/LSPS2-specific parts to src/liquidity/{lsps1,lsps2}.rs, or maybe even to client/service-specific sub-modules like src/liquidity/{client,service}/{lsps1,lsps2}.rs.

Contributor Author


Alright, I will move the LSPS1/LSPS2 specific parts

Comment thread src/liquidity.rs Outdated
pub(crate) async fn lsps1_request_channel(
&self, lsp_balance_sat: u64, client_balance_sat: u64, channel_expiry_blocks: u32,
announce_channel: bool, refund_address: bitcoin::Address,
announce_channel: bool, refund_address: bitcoin::Address, node_id: &PublicKey,
Collaborator


Hmm, I do wonder if it would make sense to create something like a struct ServiceProvider and move the API methods there. Then each registered LSP would have a corresponding ServiceProvider that exposes a bunch of public and internal APIs, which would make the modularization cleaner and avoid having to pass node_id everywhere?

Contributor Author


Makes sense, I will have a look and see
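The suggestion above could look roughly like the following. All names here (`ServiceProvider`, `LiquiditySource::provider`, `supports_lsps1`) are hypothetical stand-ins chosen for illustration: the point is only that a per-LSP handle captures the node id once, so per-LSP API methods no longer need a `node_id` parameter.

```rust
/// Hypothetical per-LSP handle: one per registered LSP.
struct ServiceProvider {
    node_id: String,
    supported_protocols: Vec<u16>,
}

impl ServiceProvider {
    /// A per-LSP API method: the node id is implicit in the handle,
    /// so callers don't pass it on every call.
    fn supports_lsps1(&self) -> bool {
        self.supported_protocols.contains(&1)
    }
}

struct LiquiditySource {
    providers: Vec<ServiceProvider>,
}

impl LiquiditySource {
    /// Look up the handle for a registered LSP once, then call its methods.
    fn provider(&self, node_id: &str) -> Option<&ServiceProvider> {
        self.providers.iter().find(|p| p.node_id == node_id)
    }
}

fn main() {
    let ls = LiquiditySource {
        providers: vec![ServiceProvider {
            node_id: "lsp_a".into(),
            supported_protocols: vec![1, 2],
        }],
    };
    assert!(ls.provider("lsp_a").map_or(false, |p| p.supports_lsps1()));
    println!("ok");
}
```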

@Camillarhi Camillarhi force-pushed the multi-lsp-support branch 2 times, most recently from 3cda7f7 to 8bbf55a Compare March 26, 2026 00:46
@Camillarhi Camillarhi requested a review from tnull March 26, 2026 00:47
@Camillarhi
Contributor Author

Hello @tnull, apologies for the delay. This is ready for another round of review

Collaborator

@tnull tnull left a comment


Cool, already looks pretty good! Did a first higher-level pass, have yet to look into the details.

Comment thread src/liquidity.rs
@@ -1,1542 +0,0 @@
// This file is Copyright its original authors, visible in version control history.
Collaborator


Would be good to structure this PR in a way that (as far as possible) makes any code moves dedicated commits that can be picked up by git diff --color-moved --patience, as otherwise reviewing this in detail will be very hard.

Comment thread src/lib.rs Outdated
}
}

discovery_ls.discover_all_lsp_protocols().await;
Collaborator


I think it would be good to make this part of the background task above? Also, can we spawn the discovery tasks in parallel rather than doing them sequentially?
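The parallel-discovery idea can be sketched like this. The real code would spawn onto the node's async runtime (e.g. a tokio task set); std threads are used here only to keep the example self-contained and runnable, and `discover_protocols` is a stand-in for the LSPS0 `list_protocols` round trip.

```rust
use std::thread;

/// Stand-in for the LSPS0 `list_protocols` request/response round trip.
fn discover_protocols(node_id: &str) -> Vec<u16> {
    match node_id {
        "lsp_a" => vec![1, 2],
        _ => vec![2],
    }
}

fn main() {
    let lsps = ["lsp_a", "lsp_b"];
    let mut results = Vec::new();
    // Kick off one discovery per LSP concurrently, then join them all,
    // instead of awaiting each round trip sequentially.
    thread::scope(|s| {
        let handles: Vec<_> = lsps
            .iter()
            .map(|id| s.spawn(move || (*id, discover_protocols(id))))
            .collect();
        for h in handles {
            results.push(h.join().expect("discovery task panicked"));
        }
    });
    assert_eq!(results.len(), 2);
    println!("ok");
}
```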

Comment thread src/lib.rs Outdated
}

/// Configures the [`Node`] instance to source inbound liquidity from the given LSP at runtime,
/// without specifying the exact protocol used (e.g., LSPS1 or LSPS2).
Collaborator


I think we can drop the remark regarding 'without specifying the exact protocol' here and elsewhere, as the API already communicates that due to being generic. I do however wonder if we'd want to move this method to an API-extension object similar to what we do for the payment types? I.e., retrieve the API object via Node::liquidity()?

Contributor Author


Thanks. I will remove the remark and move this to a Node::liquidity() API-extension

Comment thread src/liquidity/client/lsps1.rs Outdated
/// [`Bolt11Payment::receive_via_jit_channel`]: crate::payment::Bolt11Payment::receive_via_jit_channel
#[derive(Clone)]
#[cfg_attr(feature = "uniffi", derive(uniffi::Object))]
pub struct LSPS1Liquidity {
Collaborator


It seems this is currently not exposed in the API anymore?

Comment thread src/builder.rs Outdated
/// The given `token` will be used by the LSP to authenticate the user.
///
/// [bLIP-51 / LSPS1]: https://github.com/lightning/blips/blob/master/blip-0051.md
#[deprecated(note = "Use `add_lsp` instead")]
Collaborator


Hmm, I think so far we've been fine with just breaking the APIs without deprecating them first. If we find a better API I'd be fine with just dropping the old ones to clean up.

@Camillarhi Camillarhi force-pushed the multi-lsp-support branch 2 times, most recently from 6061c6d to a560764 Compare March 30, 2026 16:32
@Camillarhi Camillarhi requested a review from tnull March 30, 2026 16:32
Collaborator

@tnull tnull left a comment


Cool. Docs CI is currently failing.

Would be great if you could try once more to break up the second commit into smaller chunks, but let me know if it's too cumbersome/impossible.

Feel free to mark ready for review then.

@Camillarhi Camillarhi marked this pull request as ready for review March 31, 2026 16:07
@Camillarhi Camillarhi requested a review from tnull March 31, 2026 16:45
@ldk-reviews-bot

🔔 1st Reminder

Hey @tnull @valentinewallace! This PR has been waiting for your review.
Please take a look when you have a chance. If you're unable to review, please let us know so we can find another reviewer.

1 similar comment

@ldk-reviews-bot

🔔 2nd Reminder

Hey @tnull @valentinewallace! This PR has been waiting for your review.
Please take a look when you have a chance. If you're unable to review, please let us know so we can find another reviewer.

1 similar comment

@ldk-reviews-bot

🔔 3rd Reminder

Hey @tnull @valentinewallace! This PR has been waiting for your review.
Please take a look when you have a chance. If you're unable to review, please let us know so we can find another reviewer.

1 similar comment

@ldk-reviews-bot

🔔 4th Reminder

Hey @tnull @valentinewallace! This PR has been waiting for your review.
Please take a look when you have a chance. If you're unable to review, please let us know so we can find another reviewer.

1 similar comment

@Camillarhi Camillarhi force-pushed the multi-lsp-support branch 2 times, most recently from 22da16c to 2cda868 Compare April 9, 2026 15:16
@valentinewallace valentinewallace removed their request for review April 9, 2026 15:37
Collaborator

@tnull tnull left a comment


Thanks for splitting things out into prefactors! There's still a lot going on at the same time in the last commit; please break it up further if you see the opportunity. A few more comments, plus some suggestions on how to reduce noisiness in the last commit.

Comment thread src/liquidity/mod.rs Outdated
trust_peer_0conf: bool,
) -> Result<(), Error> {
let liquidity_source =
self.liquidity_source.as_ref().ok_or(Error::LiquiditySourceUnavailable)?;
Collaborator


Hmm, wouldn't erroring out here mean that we can't add LSPs only at runtime, if we don't also add some during build time?

Comment thread src/liquidity/mod.rs Outdated
let mut lsp_nodes = liquidity_source.lsp_nodes.write().expect("lock");
if lsp_nodes.iter().any(|n| n.node_id == node_id) {
log_error!(self.logger, "LSP node {} is already added.", node_id);
return Err(Error::DuplicateLspNode);
Collaborator


Maybe we should just silently continue rather than erroring here?

Comment thread src/event.rs Outdated
self.config.trusted_peers_0conf.contains(&counterparty_node_id);
let mut channel_override_config = None;
if let Some((lsp_node_id, _)) = self
if let Some(lsp2_node) = self
Collaborator


nit: Should be lsps2_node or similar?

);
let _ = tokio::time::timeout(
Duration::from_secs(LSPS_DISCOVERY_WAIT_TIMEOUT_SECS),
rx.wait_for(|done| *done),
Collaborator


Hmm, if there's nothing in-flight, we might just be waiting unnecessarily, no?

Contributor Author


That is true. The wait should only kick in when there's actually something in flight worth waiting for. LspNode.supported_protocols is None precisely while a node's protocol discovery hasn't completed or failed, so we can use that as the in-flight signal: if every node already has its protocols populated, there's nothing to wait for.

The previous shape selected all_lsps_for_protocol first and only waited if the result was empty. But that meant that if 1 of N LSPS2-capable nodes happened to respond quickly while the rest were still mid-discovery, we'd return a partial set and skip waiting, which isn't ideal since we want to pick the best LSP from the full set. So I've inverted the structure: wait first if anything's pending, then select once.
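The wait-first-then-select shape can be sketched as follows, with illustrative names (`LspNode`, `discovery_pending`, `all_lsps_for_protocol`); `supported_protocols == None` is the in-flight signal, and selection only runs once nothing is pending.

```rust
struct LspNode {
    node_id: String,
    /// None while this node's protocol discovery hasn't completed or failed.
    supported_protocols: Option<Vec<u16>>,
}

/// True if any node's discovery is still in flight.
fn discovery_pending(nodes: &[LspNode]) -> bool {
    nodes.iter().any(|n| n.supported_protocols.is_none())
}

/// All nodes known to support the given protocol number.
fn all_lsps_for_protocol(nodes: &[LspNode], proto: u16) -> Vec<&str> {
    nodes
        .iter()
        .filter(|n| n.supported_protocols.as_deref().map_or(false, |p| p.contains(&proto)))
        .map(|n| n.node_id.as_str())
        .collect()
}

fn main() {
    let mut nodes = vec![
        LspNode { node_id: "lsp_a".into(), supported_protocols: Some(vec![2]) },
        LspNode { node_id: "lsp_b".into(), supported_protocols: None },
    ];
    // One node is still mid-discovery: selecting now would return a partial
    // set ({lsp_a}), so we'd wait (with a timeout) here instead.
    assert!(discovery_pending(&nodes));
    nodes[1].supported_protocols = Some(vec![1, 2]);
    // Nothing pending anymore: select once, from the full set.
    assert!(!discovery_pending(&nodes));
    assert_eq!(all_lsps_for_protocol(&nodes, 2), vec!["lsp_a", "lsp_b"]);
    println!("ok");
}
```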

Comment thread src/liquidity/client/lsps1.rs Outdated
Ok(response)
}

pub(crate) async fn handle_next_event(&self, event: LSPS1ClientEvent) {
Collaborator


Can you make moving the event handling to the sub-modules 3 more prefactor commits (at least LSPS1 client, LSPS2 client, LSPS2 service, maybe also one for the LSPS0 variant)?

Contributor Author


I have broken this down into prefactor commits for LSPS1 client, LSPS2 client and LSPS2 service

Comment thread src/liquidity/client/lsps1.rs Outdated
pub(crate) pending_lsps1_opening_params_requests:
Mutex<HashMap<LSPSRequestId, oneshot::Sender<LSPS1OpeningParamsResponse>>>,
pub(crate) pending_create_order_requests:
pub(crate) pending_lsps1_create_order_requests:
Collaborator


Why are these field renames necessary? They seem to only increase the noise in the diff?

Contributor Author


The field renames have been reverted

@Camillarhi Camillarhi force-pushed the multi-lsp-support branch 5 times, most recently from af25bef to bc2dde3 Compare April 30, 2026 02:47
@Camillarhi Camillarhi requested a review from tnull April 30, 2026 02:51
@ldk-reviews-bot

🔔 1st Reminder

Hey @tnull! This PR has been waiting for your review.
Please take a look when you have a chance. If you're unable to review, please let us know so we can find another reviewer.

@ldk-reviews-bot

🔔 2nd Reminder

Hey @tnull! This PR has been waiting for your review.
Please take a look when you have a chance. If you're unable to review, please let us know so we can find another reviewer.

@Camillarhi Camillarhi force-pushed the multi-lsp-support branch 2 times, most recently from 0ab16a4 to a84f8e5 Compare May 4, 2026 22:13
Comment thread src/liquidity/client/lsps1.rs Outdated
Comment thread src/lib.rs
let cm = Arc::clone(&discovery_cm);
let logger = Arc::clone(&discovery_logger);
let ls = Arc::clone(&liquidity_handler);
discovery_set.spawn(async move {
Collaborator


We should probably make sure to cancel/abort all tasks we spawn here in case of a shutdown.

Comment thread src/builder.rs Outdated
/// and select the appropriate protocol per request automatically.
///
/// The given `token` will be used by the LSP to authenticate the user.
/// `trust_peer_0conf` controls whether the node will accept 0-confirmation channels opened by this
Collaborator


Codex:

  • Medium: trust_peer_0conf = false does not actually supersede Config::trusted_peers_0conf as documented. The event handler initializes allow_0conf from the global config and ORs in the LSP flag, so an LSP present in the global trust list is still accepted as zero-conf even when added with trust_peer_0conf = false. See docs at src/builder.rs:448 and behavior at src/event.rs:1294.
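The finding above boils down to OR-ing versus overriding. This is a minimal illustration with assumed semantics and made-up function names, not the PR's actual code: `allow_0conf_current` mirrors the described OR behavior, `allow_0conf_documented` mirrors the documented intent where the per-LSP flag takes precedence for registered LSPs.

```rust
/// Behavior as described: initialize from the global trust list,
/// then OR in the per-LSP flag (None = peer is not a registered LSP).
fn allow_0conf_current(globally_trusted: bool, lsp_flag: Option<bool>) -> bool {
    globally_trusted || lsp_flag.unwrap_or(false)
}

/// Documented intent: for a registered LSP, its own flag supersedes
/// the global list; otherwise fall back to the global list.
fn allow_0conf_documented(globally_trusted: bool, lsp_flag: Option<bool>) -> bool {
    lsp_flag.unwrap_or(globally_trusted)
}

fn main() {
    // LSP in the global trust list but added with trust_peer_0conf = false:
    assert_eq!(allow_0conf_current(true, Some(false)), true); // still accepted
    assert_eq!(allow_0conf_documented(true, Some(false)), false); // trust revoked
    // Both agree when the global list doesn't contain the peer.
    assert_eq!(allow_0conf_current(false, Some(true)), true);
    assert_eq!(allow_0conf_documented(false, Some(true)), true);
    println!("ok");
}
```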

Comment thread src/event.rs
self.liquidity_source.as_ref().map(|ls| {
ls.lsps2_funding_tx_broadcast_safe(user_channel_id, counterparty_node_id);
});
self.liquidity_source
Collaborator


Codex:

  • Low: Non-LSPS2-service nodes now call the LSPS2 service broadcast-safe hook for every FundingTxBroadcastSafe event. Because lsps2_funding_tx_broadcast_safe treats lsps2_service: None as “do not early-return,” it reaches the missing-service-handler path and logs an error for ordinary nodes/channels. See src/event.rs:681 and src/liquidity/service/lsps2.rs:164.

Comment thread src/lib.rs
res = discovery_set.join_next(), if !discovery_done => {
if res.is_none() {
liquidity_handler.mark_discovery_done();
discovery_done = true;
Collaborator


Codex:

  • High: A builder-configured LSP can become permanently unusable after a transient startup discovery failure. Node::start logs connect/discovery errors and still marks discovery done, while LSPS1/LSPS2 selection later only considers nodes with supported_protocols = Some(...) and never retries undiscovered nodes. Runtime add_liquidity_source also skips the same node ID, so users cannot recover in-process. See src/lib.rs:682, src/lib.rs:725, src/liquidity/client/lsps2.rs:506, src/liquidity/client/lsps1.rs:400, src/liquidity/mod.rs:126.

Do we see an easy way to retry discovery during the runtime? If not/if it gets too complicated I'd rather leave it for a follow-up.

Contributor Author


I will look into it and if it gets too complicated, I will do it in a follow up PR

Contributor Author


I will handle this in a follow-up PR. I will have a background retry for failed startup discovery and also expose an API, something like retry_discovery(node_id), to cover the case where a user knows their configured LSP has rolled out a new protocol.

@Camillarhi Camillarhi force-pushed the multi-lsp-support branch 2 times, most recently from 606bcf2 to a55f8b3 Compare May 5, 2026 15:11
@Camillarhi Camillarhi requested a review from tnull May 5, 2026 15:29
Collaborator

@tnull tnull left a comment


Seems CI is unfortunately failing right now (CLN failure is unrelated though).

Replace per-protocol single-LSP configuration `LSPS1Client` and `LSPS2Client`
with a unified `Vec<LspNode>` model where users configure LSP nodes via
`add_liquidity_source()` at build time or runtime and per-LSP protocol
support is discovered via the LSPS0 `list_protocols`.

- Introduce a per-LSP `trust_peer_0conf` flag to `LspConfig`/`LspNode`
  structs that controls whether 0-conf channels from that LSP are accepted
- Add LSPS0 protocol discovery `discover_lsp_protocols` with event
  handling for `ListProtocolsResponse`
- Update events to also use each LSP's `trust_peer_0conf` flag
  when deciding whether to allow 0-conf channels
- Replace `set_liquidity_source_lsps1` and `set_liquidity_source_lsps2`
  builder methods with a single `add_liquidity_source()` that takes a
  `trust_peer_0conf` flag
- Rename `set_liquidity_provider_lsps2` to `enable_liquidity_provider`
- LSPS2 JIT channels now query all LSPS2-capable LSPs and automatically
  select the cheapest fee offer across all of them
- Spawn background discovery task on `Node::start()` and expose a watch
  channel so dependent flows can wait for discovery to complete
- Add a new `Liquidity` handler `Node::liquidity()` exposing `add_liquidity_source()`
  API for adding LSPs at runtime, and `lsps1()` for the existing LSPS1 surface
@Camillarhi Camillarhi force-pushed the multi-lsp-support branch 2 times, most recently from 0c09af8 to b5a5fa4 Compare May 7, 2026 11:11
Remove the `Ignoring` variant now that the liquidity source is
always built, so the enum and its match arms are now pure
overhead. Replace it with a struct that holds the `LiquiditySource`
directly and have each trait method delegate straight to
`liquidity_manager()`.
@Camillarhi Camillarhi force-pushed the multi-lsp-support branch from b5a5fa4 to 3c80fdc Compare May 7, 2026 12:18
@Camillarhi
Contributor Author

Seems CI is unfortunately failing right now (CLN failure is unrelated though).

The failing CI has been resolved

@Camillarhi Camillarhi requested a review from tnull May 7, 2026 13:04
3 participants