the other thing that's required is for relays to have configurable garbage collection strategies, so that you can have master/archival relays with huge storage, and smaller ones that prune off records that have stopped being hot in order to contain their utilization: in short, archive relays and cache relays.
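to make that concrete, here's a minimal Go sketch of what i mean by pluggable GC strategies; the Event shape, the high-water mark, and the idle threshold are illustrative assumptions, not replicatr's actual types:

```go
package relay

import "time"

// Event is a minimal stand-in for a stored relay record
// (a hypothetical shape, not replicatr's actual type).
type Event struct {
	ID         [32]byte
	Size       int64
	LastAccess time.Time
}

// GCStrategy decides which stored events a relay may evict;
// swapping the implementation turns a node into an archive or a cache.
type GCStrategy interface {
	// Evictable reports whether ev may be pruned, given current usage
	// and the relay's storage budget, both in bytes.
	Evictable(ev Event, used, budget int64) bool
}

// ArchiveGC never evicts: archival relays keep everything.
type ArchiveGC struct{}

func (ArchiveGC) Evictable(Event, int64, int64) bool { return false }

// CacheGC prunes cold events once storage passes a high-water mark.
type CacheGC struct {
	HighWater float64       // fraction of budget that triggers pruning, e.g. 0.9
	MaxIdle   time.Duration // how long an event may go unread before it is cold
}

func (c CacheGC) Evictable(ev Event, used, budget int64) bool {
	overLimit := float64(used) > c.HighWater*float64(budget)
	cold := time.Since(ev.LastAccess) > c.MaxIdle
	return overLimit && cold
}
```

the point of the interface is that archive and cache nodes run the exact same relay code and differ only in which strategy is plugged in.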
and then, yes, you further need a model of query forwarding, so that a cache relay can propagate queries to archives to revive old records. the caches could allocate a section of their data that holds just references to other records, each stored with the origin of the original, now-expired event, and that section is also kept within a buffer size limit, so they know exactly which archive to fetch the full record from.
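that reference section could be as simple as a bounded, LRU-ordered index of stubs; a sketch, where the Stub layout (event id plus the archive it expired to) is a hypothetical shape i'm assuming for illustration:

```go
package relay

import "container/list"

// Stub replaces an expired event in the cache: just enough to know
// where to re-fetch the full record (hypothetical layout).
type Stub struct {
	EventID    [32]byte
	ArchiveURL string // the archive relay holding the full event
}

// StubIndex is a bounded, LRU-ordered set of stubs, so the reference
// section of the cache stays inside its own buffer size limit.
type StubIndex struct {
	max   int
	order *list.List // front = most recently touched
	byID  map[[32]byte]*list.Element
}

func NewStubIndex(max int) *StubIndex {
	return &StubIndex{
		max:   max,
		order: list.New(),
		byID:  make(map[[32]byte]*list.Element),
	}
}

// Add records where an evicted event can be revived from, dropping the
// oldest stub when the buffer is full.
func (s *StubIndex) Add(st Stub) {
	if el, ok := s.byID[st.EventID]; ok {
		s.order.MoveToFront(el)
		el.Value = st
		return
	}
	if s.order.Len() >= s.max {
		oldest := s.order.Back()
		delete(s.byID, oldest.Value.(Stub).EventID)
		s.order.Remove(oldest)
	}
	s.byID[st.EventID] = s.order.PushFront(st)
}

// Lookup tells a cache relay exactly which archive to forward to.
func (s *StubIndex) Lookup(id [32]byte) (Stub, bool) {
	el, ok := s.byID[id]
	if !ok {
		return Stub{}, false
	}
	s.order.MoveToFront(el)
	return el.Value.(Stub), true
}
```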
lots of stuff to do... i started doing some of this with the original "replicatr", my first attempt at a nostr relay; i implemented a whole GC for it and wrote unit tests for it. the whole idea was always about creating multi-level distributed storage. unfortunately there's no funding to focus on working on these things, so instead i'm stuck building some social media dating app system lol
this is one thing that sockets can do better, because they don't necessarily send events all at once. i previously wrote the filters such that they sort and return results all in one whack. i think what you probably want instead is that, for each filter, the response identifies the query by a number, and the client always maintains an SSE channel that allows the relay to push results.
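roughly, on the relay side that could look like the following; Result, NewQueryID, and ServeSSE are names i'm making up here, and the JSON shape is just an assumption for illustration:

```go
package relay

import (
	"encoding/json"
	"fmt"
	"net/http"
	"sync/atomic"
)

// Result tags a pushed event with the number of the query it answers,
// so the client can route it to the right pending subscription.
type Result struct {
	QueryID uint64          `json:"query_id"`
	Event   json.RawMessage `json:"event"`
}

var queryCounter atomic.Uint64

// NewQueryID hands out the number that identifies a query in responses.
func NewQueryID() uint64 { return queryCounter.Add(1) }

// ServeSSE keeps the client's push channel open and streams results as
// they arrive: hot cache hits immediately, forwarded hits whenever the
// archive answers.
func ServeSSE(w http.ResponseWriter, r *http.Request, results <-chan Result) {
	w.Header().Set("Content-Type", "text/event-stream")
	w.Header().Set("Cache-Control", "no-cache")
	flusher, ok := w.(http.Flusher)
	if !ok {
		http.Error(w, "streaming unsupported", http.StatusInternalServerError)
		return
	}
	for {
		select {
		case <-r.Context().Done():
			return
		case res, open := <-results:
			if !open {
				return
			}
			payload, err := json.Marshal(res)
			if err != nil {
				continue
			}
			fmt.Fprintf(w, "data: %s\n\n", payload)
			flusher.Flush()
		}
	}
}
```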
with this, the query can then propagate: all the results that are hot in the cache are sent immediately, and if there were events that required a query forward, those results can then get sent to the client over the SSE subscription connection.
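a sketch of that two-phase flow, reusing Result, Stub, and StubIndex from the sketches above; Filter, the Cache interface, and fetchFromArchive are hypothetical placeholders, not real APIs:

```go
package relay

// Filter is a stand-in for a nostr REQ filter (placeholder only).
type Filter struct{}

// Cache is a hypothetical view of the hot store: what still matches,
// which matches are now only stubs, and how to re-warm a revived event.
type Cache interface {
	Match(f Filter) [][]byte            // hot events matching the filter
	ExpiredMatches(f Filter) [][32]byte // ids the cache only holds stubs for
	Revive(ev []byte)                   // put a re-fetched event back in the cache
}

// HandleQuery answers one numbered query in two phases.
func HandleQuery(qid uint64, f Filter, cache Cache, stubs *StubIndex, out chan<- Result) {
	// phase 1: everything still hot goes out immediately
	for _, ev := range cache.Match(f) {
		out <- Result{QueryID: qid, Event: ev}
	}
	// phase 2: forward misses to the archive that owns them, and push
	// each revived event on the same channel when it comes back
	for _, id := range cache.ExpiredMatches(f) {
		st, ok := stubs.Lookup(id)
		if !ok {
			continue
		}
		go func(st Stub) {
			ev, err := fetchFromArchive(st.ArchiveURL, st.EventID)
			if err != nil {
				return
			}
			cache.Revive(ev)
			out <- Result{QueryID: qid, Event: ev}
		}(st)
	}
}

// fetchFromArchive is a hypothetical helper that asks the named archive
// relay for one event by id; a real version would send a REQ upstream.
func fetchFromArchive(archiveURL string, id [32]byte) ([]byte, error) {
	_, _ = archiveURL, id
	return nil, nil // stubbed out in this sketch
}
```

because the client routes by query number, it doesn't care whether a result arrived in phase 1 or trickled in later from an archive.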
i really, really need some kind of elementary event query console to do these things, a rudimentary front end. i should probably just make it a TUI; i think there is at least one existing Go TUI kind 1 client, so i should just build on that instead of fighting the bizarre lack of adequate GUIs for Go.
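for what it's worth, a bare-bones skeleton of such a console is pretty short; this assumes the charmbracelet bubbletea library, and the actual relay REQ is left as a comment since it depends on the transport above:

```go
package main

import (
	"fmt"
	"os"
	"strings"

	tea "github.com/charmbracelet/bubbletea"
)

// model holds the query being typed and the results received so far.
type model struct {
	query   string
	results []string
}

func (m model) Init() tea.Cmd { return nil }

func (m model) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
	if key, ok := msg.(tea.KeyMsg); ok {
		switch key.Type {
		case tea.KeyCtrlC, tea.KeyEsc:
			return m, tea.Quit
		case tea.KeyEnter:
			// here you would send m.query to the relay as a REQ filter
			m.results = append(m.results, "sent: "+m.query)
			m.query = ""
		case tea.KeyBackspace:
			if len(m.query) > 0 {
				m.query = m.query[:len(m.query)-1]
			}
		case tea.KeyRunes:
			m.query += string(key.Runes)
		}
	}
	return m, nil
}

func (m model) View() string {
	return strings.Join(m.results, "\n") +
		fmt.Sprintf("\nquery> %s\n(enter to send, esc to quit)", m.query)
}

func main() {
	if _, err := tea.NewProgram(model{}).Run(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```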