why is the response even giving you whole events anyway? surely it can just be a list of IDs
well, you know, idk about you but there's these things called "paths" and "parameters" in http REST interfaces that let you do things like set sort order and... well, yes, you can put things like time windows in there, but it's a limited space for substantial lists of things
yes, i will put since and until in the parameters of the `/filter` endpoint, and no, just go away with your pagination page limit blah blah blah. we give back only lists of event IDs: you asked for events, you take that list and use it. why complicate the relay for limited use cases
no, it's just an API detail
instead of returning full events, the filter and fulltext search endpoints are specced to return simple lists of event IDs in whatever expected sort order (i'm thinking to put since/until/sort as parameters to the endpoint, because currently results come back in ascending order by default, which people may not want; in theory they could even be further sorted by npub or kind)
the reason for changing the API to only return event IDs is that it pushes the query state problem for pagination back onto the client. the relay doesn't need to additionally keep track of the recent history of queries to enable pagination, and i don't like that anyway because it's inherently inconsistent: the query could return more events at any moment afterwards, so what do we do if we push pagination onto the relay? do we make it update those things? then the client will get out of sync as well
implicitly, any query with a filter that has no "until" on it searches up to a specific moment in time: the time at which the relay receives the query. the identical query 10 seconds later could have more events, at least in the window since that time, not to mention it may get older matching events pushed to it by spiders or whatever in between
so, i say, fuck this complication. you just make an index mapping event serials to event IDs, then you search the indexes created by the filter fields, look up the event ID table, pull every one of those out, and return them sorted in either ascending or descending order of storage on the relay (which is mostly actual chronological order)
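the resolve step described above can be sketched like this; `resolve` is a hypothetical name, and the maps stand in for whatever key-value tables the relay actually uses:

```go
package main

import (
	"fmt"
	"sort"
)

// serials are assigned in storage order, which is mostly chronological.
// the filter-field indexes yield a set of serial hits; we sort the serials
// in the requested direction, then map serial -> event ID for the response.
func resolve(hits map[uint64]bool, serialToID map[uint64]string, descending bool) []string {
	serials := make([]uint64, 0, len(hits))
	for s := range hits {
		serials = append(serials, s)
	}
	sort.Slice(serials, func(i, j int) bool {
		if descending {
			return serials[i] > serials[j]
		}
		return serials[i] < serials[j]
	})
	ids := make([]string, len(serials))
	for i, s := range serials {
		ids[i] = serialToID[s]
	}
	return ids
}

func main() {
	idx := map[uint64]string{1: "aa", 2: "bb", 3: "cc"}
	fmt.Println(resolve(map[uint64]bool{3: true, 1: true}, idx, false)) // [aa cc]
}
```

note there's no cursor or offset anywhere: the whole result set for the filter comes back in one pass, which is exactly what keeps the relay stateless.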
idk, maybe i should add a timestamp to that index so this invariant can be enforced
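if the timestamp did go into that index, the invariant check could be as simple as this; the struct layout is just my assumption of what the entry would hold:

```go
package main

import "fmt"

// hypothetical entry in the serial index, with the event's created_at
// stored alongside so serial order can be checked against time order
type serialEntry struct {
	Serial    uint64
	CreatedAt int64
	EventID   string
}

// monotonic reports whether created_at never decreases as serials increase,
// i.e. whether storage order really is chronological order
func monotonic(entries []serialEntry) bool {
	for i := 1; i < len(entries); i++ {
		if entries[i].CreatedAt < entries[i-1].CreatedAt {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(monotonic([]serialEntry{{1, 10, "a"}, {2, 20, "b"}})) // true
}
```

in practice the check could only ever be "mostly" true, since spidered-in old events break strict monotonicity by design.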
anyway, i'm interested to hear other opinions about why the relay should implement the filter api differently than i described, but i have thought a lot about it and i'm leaning very much towards returning IDs, so the client manages its cache state instead of pushing that onto the relay to give people their precious pagination
i already got too much complexity in here
making a custom search to enable faster processing of social graphs is a possibility, but it would be a new endpoint
unlike the silly NIPs scheme with its meaningless numbers, the APIs i envision are named and use a standard naming scheme, so you could have a custom endpoint that does some kind of specific ordering as per the requirements of vertex
like, i'm envisioning things like a query that specifically scans profile/mute/follow lists, with a specific key structure that lets you implement multiple sorts of these on the relay side before actually fetching them, like clustering the results so you get them in a profile/follow/mute sequence for each pubkey. but that's already a simple one based on an author sort, something that was on my mind
this one requires a second index of pubkey to profile event, or something along these lines; i was thinking about this for the filter endpoint too, to enable sorting results by pubkey as well
a query that is specialised for social graph analysis totally could be done, just tell me what you think it should do and i can come up with a key layout that lets that search be optimised
this is why i'm prompting you to think about what you think a helpful API for your task would look like, because after i'm done making the basic replacement for filter search and HTTP for everything else using nip-98 and optionally JWT, this is the kind of thing i can see becoming useful
right now, #realy is a bit messy in the sense that things are all still a bit jammed together in ways that they shouldn't be, and some things are separated and replicating things in ways they shouldn't be
the ideal situation is where you can define a simple single source file that specifies what parts are available, so eg, we have a standard NIP-01 implementation, and added to that is a spider that farms the whole nostr network for this data, and then it exposes protected endpoints that yield search results that precisely fit the needs of vertex
so, yeah, from what you are describing, right off the top of my head i can picture something like an endpoint called `/directory` which takes a parameter of the last-updated timestamp that you are interested in (as your database already has everything up to that moment) and it spews back all of the events of those kinds newer than that in one big shebang, and that funnels into your graph generation pipeline