so, that would be reply and reaction events
you would have to scan the entire event database of a relay to do this, and it would need to create a very large graph table compiling a count on each link as a weight on the edge between two nostr:npubs
it would be a fuzzy value, also... you could probably even have the relay add the weight value to the graph table as soon as the event comes in; to compute the graph you probably want to snapshot that table and then create a table of the weights that exceed some threshold criteria - the top 100, 200, something like this
i've thought about it, as you can see, and i think it would be a great addition to the relay API - the client could then query for each npub in a set of results to get their weights and set a cut-off level and not show those events, or it could be done on the relay as an extra API extending the filter with a graph weight cut-off spec of some sort
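to make it concrete, here's a minimal sketch of the kind of weight table i mean, in go - the names (EdgeKey, WeightTable) and the structure are illustrative, not realy code:

```go
package relay

import "sync"

// EdgeKey identifies a directed link between two npubs,
// e.g. the author of a reply/reaction -> the npub it targets.
type EdgeKey struct {
	From string // pubkey of the reply/reaction author
	To   string // pubkey being replied to or reacted to
}

// WeightTable accumulates a count per edge as events arrive.
type WeightTable struct {
	mu      sync.Mutex
	weights map[EdgeKey]uint64
}

func NewWeightTable() *WeightTable {
	return &WeightTable{weights: make(map[EdgeKey]uint64)}
}

// Bump is called when a reply or reaction is stored; it just
// increments the weight on the edge between the two pubkeys.
func (w *WeightTable) Bump(from, to string) {
	w.mu.Lock()
	w.weights[EdgeKey{From: from, To: to}]++
	w.mu.Unlock()
}

// Snapshot copies out only the edges at or above a cut-off, which is
// the table of threshold-exceeding weights described above.
func (w *WeightTable) Snapshot(threshold uint64) map[EdgeKey]uint64 {
	w.mu.Lock()
	defer w.mu.Unlock()
	out := make(map[EdgeKey]uint64)
	for k, v := range w.weights {
		if v >= threshold {
			out[k] = v
		}
	}
	return out
}
```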
the thing is that the follow/mute lists of a designated set of users (subscribers, presumably) make a neat and relatively cheap mechanism for whitelisting users who are probably not spammers... this is the purpose of the WoT whitelisting in #realy
this other stuff would be an extended API, and clients would be able to interact with it via an extended filter request that includes a threshold; it would be better to do it on the relay than to waste all that bandwidth and processing on a slow shitty client
#realy #devstr #progressreport
not a big report today, though i've been busy doing things
the http api now has import/export for admin stuff, and i just added a new feature to rescan the events to update the indexes, so when i add new index types, they are generated for all existing events again
why i did that: i have added a new index to enable searches that just return the event ID
but i want to avoid unmarshaling the events just to get their IDs, so i made a new index that contains the event ID
then i realised the pubkey needs to be in there too, so results can be screened for muted authors and those events aren't sent pointlessly, and IMO disrespectfully, to users
i hadn't finished designing this index, but even if i modify it, regenerating it from the pre-pubkey version just produces the new index anyway. indexes are kinda nice like this: the data is neutral, so if you change the logic and it needs new data in the indexes, you can just change them. they are internal and private, they just need to be regenerated when new index types are added
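roughly the kind of index key i'm talking about - the prefix byte and field order here are just a sketch, not realy's actual layout:

```go
package relay

import "encoding/binary"

// idIndexKey sketches a fixed-layout key: prefix byte, 32 byte event
// ID, 32 byte author pubkey, then the created_at timestamp, so a scan
// can yield ID, author and time without unmarshaling the event.
func idIndexKey(prefix byte, id, pubkey [32]byte, createdAt int64) []byte {
	key := make([]byte, 0, 1+32+32+8)
	key = append(key, prefix)
	key = append(key, id[:]...)
	key = append(key, pubkey[:]...)
	key = binary.BigEndian.AppendUint64(key, uint64(createdAt))
	return key
}
```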
anyway, this is all in aid of implementing an http filter that is basically like a regular filter except it rips out the search and ids fields, because ids are a separate API - which i also have to implement, actually, but there is an order of operations in these things
first you encode, then you decode
first you get a list of event ids
then you write the code to get the events for them back out
almost there
maybe i can finish this filter tonight
this doesn't happen with #realy's auth implementation; i'd be almost certain there's something wrong with your implementation as well
my relay needed changes to enable full public read with selective auth required: it edits the filters to remove the stuff that is not allowed without auth, returns the result, and then sends a CLOSED auth-required. this works on coracle and nostrudel as well as jumble... jumble was fine with how realy was doing it before, but the others were not, because they send queries that mix DMs with allowed event kinds and the socket was left waiting for them
so i just filter it out, give back what results i can, then tell the client to auth; it doesn't stall the whole socket, it just blocks reading the protected stuff
what also often happens is that once a client auths in order to publish an event, that socket is unlocked and doesn't go through this pathway at all
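a minimal sketch of the filter-editing part - the kind numbers and function names here are illustrative, not realy's actual code:

```go
package relay

// kinds that require an authed socket - DMs and gift wraps are the
// usual suspects; this list is illustrative only.
var authRequiredKinds = map[int]bool{4: true, 1059: true}

// splitFilterKinds strips protected kinds out of a REQ filter so the
// public part can be answered immediately. If anything was stripped,
// the relay follows up with CLOSED <subID> "auth-required: ..." after
// sending what it can, instead of stalling the whole socket.
func splitFilterKinds(kinds []int, authed bool) (allowed []int, stripped bool) {
	if authed {
		return kinds, false
	}
	for _, k := range kinds {
		if authRequiredKinds[k] {
			stripped = true
			continue
		}
		allowed = append(allowed, k)
	}
	return allowed, stripped
}
```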
this is why i have been saying for a while now, a few months maybe, that the design of nostr using sockets for everything makes it a lot harder to reason about. the auth spec does not make it clear that authorization is a PER SOCKET thing - it is a state: the socket is authed, or it is not. there are no "authed for some request" semantics, and that confuses a lot of people because the spec is not clear about it
auth is the bread and butter of the internet... without it there are no accounts and no ability to monetize anything. seems like y'all are finally starting to grasp this, but you still don't seem to be quite there
a great example of the #interop failure of #nostr #devstr
i mute a user in jumble
it is still visible in coracle and nostrudel
i mute it also in coracle
it is still visible in nostrudel
i close coracle and nostrudel tabs forevermore, i'm not tolerating this pollution of my screen's pixels and the waste of CPU and memory (ie, electricity bill and wear on my hardware) that this all entails
because only #jumble actually respects me as a user, with the small exception that #coracle lets me totally hide any notice of the trash i muted. and if anyone wants to actually interact with me, they should stop being a twat in-group brainwashed drone - once i mute you, it's pretty much gonna be forever. i'm provoked to unmute people who are behaving civilly, but honestly, i wish i wasn't. nostr:npub1syjmjy0dp62dhccq3g97fr87tngvpvzey08llyt6ul58m2zqpzps9wf6wl please enable fully hiding muted notes, instead of wasting almost as much space on my screen and wasting my time downloading them
#realy does not send you events at all from users you muted, so long as it has your mute list to consult before returning your results
this should just be standard. no bandwidth, no compute, no memory, and no display bandwidth wasted on shit you don't care about
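the whole screening step is basically this, sketched in go (the Event type here is a stand-in for whatever event type the relay uses internally):

```go
package relay

// Event is a stand-in for whatever event type the relay uses internally.
type Event struct {
	ID     string
	PubKey string
}

// screenMuted drops events authored by anyone on the requester's mute
// list before they ever hit the socket, so no bandwidth or client-side
// compute is wasted on them.
func screenMuted(results []Event, muted map[string]bool) []Event {
	out := make([]Event, 0, len(results))
	for _, ev := range results {
		if !muted[ev.PubKey] {
			out = append(out, ev)
		}
	}
	return out
}
```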
today i am learning how to actually write a full, proper #http server using the #golang standard library for handlers and multiplexer (router)
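for reference, the bare shape of it with nothing but net/http - go 1.22+ gives you method and path patterns in the standard ServeMux, so no third party router is needed; the paths and handlers here are just placeholders:

```go
package main

import (
	"log"
	"net/http"
)

func main() {
	mux := http.NewServeMux()

	// go 1.22+ ServeMux matches on method and path patterns directly,
	// so basic routing needs no third party router at all.
	mux.HandleFunc("GET /event/{id}", func(w http.ResponseWriter, r *http.Request) {
		id := r.PathValue("id")
		w.Write([]byte("event " + id + "\n"))
	})
	mux.HandleFunc("POST /filter", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusNotImplemented)
	})

	log.Fatal(http.ListenAndServe(":3334", mux))
}
```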
looking at all these silly things - gorilla/mux, go-chi - and i wasn't even considering gin and all the rest, idgaf, i want to learn the base of this. i already have code that does this shit, but i need to integrate this generated openapi 3 API i've designed correctly
i'm doing it for #realy first, but next week i literally have to do this exact same thing for my #fiatmine job, so i'm killing two birds with one stone today. once i figure out how to plug in this generated code, then clients can be built from the same spec via code generators, which massively reduces adoption friction - i mean, this is why REST dominates the industry now, and it's really needed to eliminate adoption friction for #nostr, because as it stands right now the SDKs that exist are awful, inconsistent, inefficient, and usually buggy. NDK is a bastard piece of shit, so i'm told, and my colleagues have forked it and are superseding it with WASM based C++ generated code
what i'm building here will be a huge icebreaker for nostr apps to integrate with web 2.0
i'm not so much concerned about the lefts running their own relays, idgaf about what those drones are doing, but what i am concerned about is leveraging the eventbus/pubsub architecture of nostr to facilitate real world deployments and have people paying nostr devs to build this stuff, instead of begging from bitcoin whales
i just spent much of today and yesterday afternoon learning to write openapi specs, and i have built out a whole thing now and got it generated. not sure if gorilla/mux is exactly what i want, but i keep hearing that name, so ok, trying that first
found a nice tool to serve up an embedded swagger UI through it. i was horrified to see that it's 11MB for the standalone distribution (the one that doesn't depend on NPM's servers), but ok, this is not unbearable. it's a little slow to update, but i don't think i particularly care about it being more current than 5 months ago:
https://github.com/swaggest/swgui
so i will now have a generated API interface that's easy to build clients against for this new nostr HTTP interface i'm building, and people will be able to poke at the API without having to leave the comfort of their browser
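the wiring is roughly this - i'm assuming swgui's embedded-UI package still exposes a New(title, schemaURL, basePath) constructor returning an http.Handler, so check the repo for the current signature:

```go
package main

import (
	"log"
	"net/http"

	"github.com/swaggest/swgui/v5emb"
)

func main() {
	mux := http.NewServeMux()

	// serve the generated OpenAPI 3 document (could also be embedded
	// with go:embed instead of read from disk)
	mux.HandleFunc("GET /openapi.json", func(w http.ResponseWriter, r *http.Request) {
		http.ServeFile(w, r, "openapi.json")
	})

	// mount the embedded swagger UI so the API can be poked at from a
	// browser without leaving the relay
	mux.Handle("/docs/", v5emb.New("realy HTTP API", "/openapi.json", "/docs/"))

	log.Fatal(http.ListenAndServe(":3334", mux))
}
```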
also did i ever mention that i don't care to submit a PR about what i'm building? that's right, because this thing has its own documentation endpoint and every instance of #realy will be serving the docs of the version of the spec it uses
no need to wait forever for a bunch of bikeshedding to go on and someone to decide "oh nah, we don't like this" even though i've got another relay dev colleague using it and three client devs deploying it in their app
flinging cats into nests of strutting pigeons perched on chess boards they think they have had victory on is my idea of fun
#devstr #swagger #golang
#devstr #realy #progressreport
i have been in the process of building the new HTTP protocol, but i wanted to first actually upgrade a production system to whatever new code i've got running, to sorta make sure that it's really working with a reasonable number of actual users and spiders making connections to it
well, anyway, the purpose was pretty small, mainly this:
there is now a new index, which consists of an event ID, and the event created_at timestamp
for now, the production system has just made these for every event it has, and will generate them for every new event that comes in
but the reason for it was so that as soon as i update to the full finished MVP implementation of the protocols, that the necessary indexes are already in place
i have basically already implemented the fetch by ID endpoint and event publish via http; the last piece is the http `/filter` endpoint, which basically provides for doing a search based on kinds, authors and tags.
the "search" field is a separate thing anyway, and is intended for full text indexes and ... well, DVMs, which are basically what i'm superseding btw
these return only the event IDs, and to enable that, i needed to create a new index that stores the event ID and created_at timestamp, so i can find matches by index, use that to look up the FullIdIndex entry, and from that assemble a list and sort it either ascending or descending based on the timestamp in the index
without having to decode the event data - that's important, because decoding is an expensive operation when those two fields are all i need to get the result
and then the caller knows that, at the moment the results were delivered, the list is correct for the state of the relay. it can segment that list if necessary and request only the individual events it actually needs, which is a big bandwidth saving as well as enabling simpler pagination by shifting the query state to the client
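in code terms the sort step is about this much - IdTsEntry is an illustrative name, not the actual struct in realy:

```go
package relay

import "sort"

// IdTsEntry is what the new index yields per match: just the event ID
// and its created_at, with no event decoding involved.
type IdTsEntry struct {
	ID        [32]byte
	CreatedAt int64
}

// sortByTime orders the matches ascending or descending by created_at,
// ready to be returned to the caller as a plain list of IDs.
func sortByTime(entries []IdTsEntry, descending bool) {
	sort.Slice(entries, func(i, j int) bool {
		if descending {
			return entries[i].CreatedAt > entries[j].CreatedAt
		}
		return entries[i].CreatedAt < entries[j].CreatedAt
	})
}
```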
of course, clients can decide to update that state, and because they already have the same query's results, if they store them, they can even see if new events popped up in between, as chaos tends to allow (clock skew and network latency), but the client doesn't have to have those events thrown at it immediately, as is the case with standard nip-01 EVENT envelope responses on the websocket
now, they can literally just ask for a set of event IDs and have them spewed back as line-structured JSON (jsonl) and voila
far simpler to parse and understand for a humble web developer
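so the client flow ends up being something like this - the paths and the filter fields here are assumptions for illustration, not the final spec:

```go
package main

import (
	"bufio"
	"bytes"
	"fmt"
	"net/http"
)

// fetchIDs posts a simplified filter and reads back one event ID per
// line (jsonl); nothing is pushed at the client until it asks.
func fetchIDs(relay string, filterJSON []byte) ([]string, error) {
	resp, err := http.Post(relay+"/filter", "application/json", bytes.NewReader(filterJSON))
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	var ids []string
	sc := bufio.NewScanner(resp.Body)
	for sc.Scan() {
		ids = append(ids, sc.Text())
	}
	return ids, sc.Err()
}

func main() {
	ids, err := fetchIDs("https://relay.example.com", []byte(`{"kinds":[1],"limit":20}`))
	if err != nil {
		panic(err)
	}
	// the client can now segment this list and request only the events
	// it actually needs from the events endpoint
	fmt.Println(len(ids), "event IDs")
}
```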
yay, i'm gonna call it my #GA message - finished my hours in the pit of the fiat mine today, and almost completed a new index for the database, plus the code that generates it out of the records already in the database
this afternoon, i will be completing the simplified filter endpoint for #realy; maybe i'll even manage to feed the output of a filter into the events endpoint and see the full HTTP flow for a simple filter query and retrieving the events
after talking with nostr:npub176p7sup477k5738qhxx0hk2n0cty2k5je5uvalzvkvwmw4tltmeqw7vgup a little, i decided that i need to add an element to the event ID index i am creating that stores the timestamp with it, so it can be used to sort the result keys and send the list of event IDs back in either ascending or descending created_at order (with a limit parameter also, which can be used with the last entry - smallest or largest depending on sort order - to actually fetch pages... still no guarantee that is going to be the same actual result as the first time, but anyway)
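the paging part is as dumb as it sounds - a sketch only, names assumed:

```go
package relay

// nextPageBoundary takes the created_at timestamps of the page just
// received (already sorted in the requested order) and returns the
// boundary for the next request: with a descending sort the last value
// is the smallest and becomes the next "until"; ascending, it is the
// largest and becomes the next "since". there is still no guarantee
// the next page matches a re-run of the original query.
func nextPageBoundary(sortedCreatedAt []int64) int64 {
	return sortedCreatedAt[len(sortedCreatedAt)-1]
}
```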
the event ID index is required in order to enable finding event IDs, and since the only criterion that matters for this search is the timestamp, it will have that now
ok, off to werk then
just remember: a hobby is just work that nobody cares about, recreation is mostly just sleep and ablutions so what you do in your downtime is just work that you value
if one wants to transition to a new field of work, as i do, to go all in on nostr dev, then i have to start by doing nostr dev as my hobby. the great amount of support i received in response to pimping my crowdfund page for it on #geyser was very encouraging - i may be close to finding a legitimate business use case that has money to keep paying me to do specific things with it and let me quit working on shitcoin social networks
this is why i'm prompting you to think about what a helpful API for your task would look like, because after i'm done making the basic replacement for filter search, and HTTP for everything else using nip-98 and optionally JWT, this is the kind of thing i can see becoming useful
right now, #realy is a bit messy in the sense that some things are still jammed together in ways they shouldn't be, and other things are separated and duplicate each other in ways they shouldn't be
the ideal situation is where you can define a simple single source file that specifies what parts are available, so eg, we have a standard NIP-01 implementation, and added to that is a spider that farms the whole nostr network for this data, and then it exposes protected endpoints that yield search results that precisely fit the needs of vertex
so, yeah, from what you are describing, right off the top of my head i can picture something like an endpoint called `/directory` which takes a parameter of the last-updated timestamp you are interested in (as your database already has everything up to that moment) and spews back all of the relevant events newer than that in one big shebang, and that funnels into your graph generation pipeline
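sketched out, that endpoint is something like this - the handler and helper names are hypothetical, and the storage query is a stand-in:

```go
package relay

import (
	"net/http"
	"strconv"
)

// handleDirectory sketches the idea: take the last-updated timestamp
// as a query parameter and spew back every stored event newer than it,
// one per line, straight into the consumer's graph pipeline.
func handleDirectory(w http.ResponseWriter, r *http.Request) {
	since, err := strconv.ParseInt(r.URL.Query().Get("since"), 10, 64)
	if err != nil {
		http.Error(w, "bad since parameter", http.StatusBadRequest)
		return
	}
	for _, raw := range fetchEventsSince(since) {
		w.Write(raw)
		w.Write([]byte("\n"))
	}
}

// fetchEventsSince is a stand-in for the relay's storage query: scan
// the indexes for events with created_at greater than since.
func fetchEventsSince(since int64) [][]byte { return nil }
```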