Silberengel @Silberengel - 13h
Or do you mean that you can click on a 30040 and see which 30040s it is nested in? That's something that can be computed.
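Computing which kind-30040 index events a given event is nested in is just a scan over the indexes' address tags. A minimal sketch, assuming a simplified event struct and that the index references its children via `"a"` tags (field and function names here are illustrative, not any particular library's API):

```go
package main

import "fmt"

// Event is a cut-down nostr event; just the fields this sketch needs.
type Event struct {
	ID   string
	Kind int
	Tags [][]string
}

// parentsOf scans a set of kind-30040 index events and returns those that
// reference the given address (e.g. "30040:<pubkey>:<d-tag>") in an "a" tag.
func parentsOf(address string, events []Event) []Event {
	var parents []Event
	for _, ev := range events {
		if ev.Kind != 30040 {
			continue
		}
		for _, tag := range ev.Tags {
			if len(tag) >= 2 && tag[0] == "a" && tag[1] == address {
				parents = append(parents, ev)
				break
			}
		}
	}
	return parents
}

func main() {
	events := []Event{
		{ID: "idx1", Kind: 30040, Tags: [][]string{{"a", "30040:pub:child"}}},
		{ID: "idx2", Kind: 30040, Tags: [][]string{{"a", "30040:pub:other"}}},
	}
	for _, p := range parentsOf("30040:pub:child", events) {
		fmt.Println(p.ID)
	}
}
```

A client can do this over whatever events it has locally; doing it server-side with a proper index is the relay-job case discussed below.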
elfspice @mleku - 13h
creating backlinks would require an indexing system. making events to share the indexing system could be done, but it's really a job for a relay
nostr:nprofile1qqs99d9qw67th0wr5xh05de4s9k0wjvnkxudkgptq8yg83vtulad30gpp4mhxue69uhkummn9ekx7mqpz3mhxue69uhhyetvv9ujuerpd46hxtnfduq32amnwvaz7tmgd9ehgtnwdaehgu3wd3skueqyluj3x
semisol @Semisol - 12h
my opinion is that nostr REQs are too limited, and to some extent purpose-specific indexes will emerge. they should be standardized to avoid fragmentation, and this is one reason I was saying it should be easy to extend relays
elfspice @mleku - 12h
it's built in, you can construct filters for this... i was chatting with him the other day about various relay index related subjects, in fact. and really, to do more cool stuff with relays we need to have more indexes and probably a more advanced query mechanism.

i don't think we really quite need to implement graphql exactly, but simply add full result generation and pagination, and the necessary garbage-collected query cache that is required to serve up the paginated results correctly in an efficient form. i've been thinking about adding such a query mechanism to a NIP-98 authed HTTP endpoint, but it is quite a bit of extra work to cache queries.

like, i'm writing an index for npubs right now as my next current task, as a mechanism for compressing especially follow/mute lists down to very small lists of index keys instead of the whole npub.

so what i would have in mind is: it accepts a standard query, and then gives you the metadata of the result, ie, total number of results. it has a list of all the event serial numbers cached under a hash of the canonical form of the filter that generated it, and you can then ask it for results-per-page/page number and voila, pagination. but it needs a second, temporary index, which could be kept in memory or stashed in a flat file under a hash of the canonically formatted filter. and yes, i already have a filter canonicalisation algorithm and have applied it somewhere, i forget exactly now, but it generates a truncated hash identifier of filters for some reason
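The scheme described above can be sketched in a few lines: canonicalise the filter by sorting its fields, hash that to a truncated identifier, cache the full list of matching event serials under the identifier, and serve pages out of the cache. A minimal illustration, assuming a cut-down filter with only kinds and authors (all names here are made up for the example; a real implementation needs eviction/garbage collection as mentioned above):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"sort"
	"strings"
)

// Filter is a cut-down nostr REQ filter; only the fields needed for the sketch.
type Filter struct {
	Kinds   []int
	Authors []string
}

// canonical produces a deterministic string form of the filter by sorting
// each field, so equivalent filters produce the same identifier.
func canonical(f Filter) string {
	kinds := append([]int(nil), f.Kinds...)
	sort.Ints(kinds)
	authors := append([]string(nil), f.Authors...)
	sort.Strings(authors)
	var b strings.Builder
	fmt.Fprintf(&b, "kinds=%v;authors=%v", kinds, authors)
	return b.String()
}

// filterID returns a truncated sha256 of the canonical form.
func filterID(f Filter) string {
	sum := sha256.Sum256([]byte(canonical(f)))
	return hex.EncodeToString(sum[:8]) // 64-bit truncated identifier
}

// queryCache maps a filter identifier to the full list of matching event
// serial numbers; a real relay would garbage-collect stale entries.
type queryCache struct {
	results map[string][]uint64
}

// page returns one page of cached serials plus the total result count.
func (c *queryCache) page(id string, perPage, pageNum int) (serials []uint64, total int) {
	all := c.results[id]
	total = len(all)
	start := perPage * pageNum
	if start >= total {
		return nil, total
	}
	end := start + perPage
	if end > total {
		end = total
	}
	return all[start:end], total
}

func main() {
	f1 := Filter{Kinds: []int{30040, 1}, Authors: []string{"b", "a"}}
	f2 := Filter{Kinds: []int{1, 30040}, Authors: []string{"a", "b"}}
	fmt.Println(filterID(f1) == filterID(f2)) // equivalent filters, same id

	c := &queryCache{results: map[string][]uint64{filterID(f1): {1, 2, 3, 4, 5}}}
	p, total := c.page(filterID(f1), 2, 1)
	fmt.Println(p, total)
}
```

Storing serial numbers rather than full events is what makes the temporary index cheap enough to keep in memory or in a flat file keyed by the filter hash.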
well, we could make a new variant of the REQ maybe? if you want to make a proposal i will give it a good going over and make sure it's solid, perhaps we can get others to help with this. part of the issue is that to do pagination you need to cache queries and their results, so it must be spam-protected, and it's dependent on implementing a state cache for the queries