this doesn't happen with the #realy auth implementation, and i'd be almost certain there's something wrong with your implementation as well. my relay needed changes to enable full public read with selective auth: it edits the filters to remove whatever is not allowed without auth, returns the results, and then sends a CLOSED auth-required. this works on coracle and nostrudel as well as jumble. jumble was fine with how realy was doing it before, but the others were not; they got stuck because they send queries that mix DMs with allowed event kinds, and the socket was left waiting for them. so i just filter those kinds out, then tell the client to auth after giving back what results it can, and it doesn't stall the whole socket, it just blocks reading the restricted kinds.

what also happens often is that once a client auths to publish an event, that socket is unlocked and doesn't go through this pathway at all.

this is why i have been saying for a while now, a few months maybe, that nostr's design of using sockets for everything makes it a lot harder to reason about. the auth spec does not make it clear that authorization is a PER SOCKET thing, that is, it is a state: the socket is either authed or it is not. there are no "authed for some request" semantics, and that confuses a lot of people; the spec is not clear about it.

auth is the bread and butter of the internet. without it there are no accounts and no ability to monetize anything. seems like y'all are starting to finally grasp this, but you still don't seem to be quite there
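the filter-editing step described above can be sketched roughly like this. a minimal sketch, assuming a couple of illustrative auth-gated kinds (4 for DMs, 1059 for gift wraps); the function names and the kind list are mine, not realy's actual code:

```go
package main

import "fmt"

// Kinds that require an authed socket to read. Illustrative only;
// a real relay would make this configurable.
var authRequiredKinds = map[int]bool{4: true, 1059: true}

// stripAuthKinds removes kinds an unauthenticated socket may not read
// from a filter's kind list, reporting whether anything was removed.
func stripAuthKinds(kinds []int) (allowed []int, removed bool) {
	for _, k := range kinds {
		if authRequiredKinds[k] {
			removed = true
			continue
		}
		allowed = append(allowed, k)
	}
	return allowed, removed
}

func main() {
	// A mixed query: DMs (kind 4) alongside public kinds 1 and 30023.
	allowed, removed := stripAuthKinds([]int{1, 4, 30023})
	fmt.Println(allowed, removed)
	if removed {
		// After serving what it can, the relay tells the client to
		// auth instead of stalling the whole socket:
		fmt.Println(`["CLOSED","sub1","auth-required: restricted kinds need AUTH"]`)
	}
}
```

the point being that the public kinds still get answered, and only then does the CLOSED go out, so clients mixing DMs with allowed kinds don't hang.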

today i am learning how to write a full, proper #http server using the #golang standard library for the handlers and the multiplexer (router). i've been looking at all these silly things, gorilla/mux, go-chi, and i wasn't even considering gin and the rest. idgaf, i want to learn the base of this. i already have code that does this, but i need to correctly integrate the generated OpenAPI 3 API i've designed.

i'm doing it for #realy first, but next week i literally have to do the exact same thing for my #fiatmine job, so i'm killing two birds with one stone today. once i figure out how to plug in this generated code, clients can be built from the same spec via code generators, which massively reduces adoption friction. i mean, this is why REST dominates the industry now, and eliminating adoption friction is really needed for #nostr, because as it stands right now, the SDKs that exist are awful, inconsistent, inefficient, and usually buggy. NDK is a bastard piece of shit, so i'm told, and my colleagues have forked it and are superseding it with WASM-based, C++ generated code.

what i'm building here will be a huge icebreaker for nostr apps to integrate with web 2.0. i'm not so much concerned about the lefts running their own relays, idgaf what those drones are doing. what i am concerned about is leveraging the eventbus/pubsub architecture of nostr to facilitate real world deployments, and having people paying nostr devs to build this stuff, instead of begging from bitcoin whales

#devstr #realy #progressreport

i have been building the new HTTP protocol, but i wanted to first upgrade a production system to whatever new code i have running, to make sure it really works with a reasonable number of actual users and spiders connecting to it.

anyway, the purpose of the upgrade was pretty small: there is now a new index, consisting of an event ID and the event's created_at timestamp. for now, the production system has built these for every event it has, and will generate them for every new event that comes in. the reason is that as soon as i update to the fully finished MVP implementation of the protocol, the necessary indexes are already in place.

i have basically already implemented the fetch-by-ID endpoint and event publish via HTTP. the last piece is the HTTP `/filter` endpoint, which provides for searching by kinds, authors and tags. the "search" field is a separate thing anyway, intended for full text indexes and... well, DVMs, which are basically what i'm superseding, btw.

these queries return only the event IDs. to enable that, i needed the new index that stores the event ID and created_at timestamp, so i can find the event by index, use it to find the FullIdIndex entry, and from that assemble a list and sort it ascending or descending by the timestamp in the index, without having to decode the event data. that's important, because decoding is an expensive operation when those two fields are all i need for the result.

the caller then knows that, at the moment the results were delivered, the list was correct for the state. it can segment that list if necessary and request only the individual events it actually needs, which is a big bandwidth saving, as well as enabling simpler pagination by shifting the query state to the client. of course, clients can decide to update that state, and because they already have the same query's results, if they store them, they can even see whether new events popped up in between, as chaos tends to allow (clock skew and network latency). but the client doesn't have to have those events thrown at it immediately, as is the case with the standard nip-01 EVENT envelope responses on the websocket. now they can literally just ask for a set of event IDs and have them spewed back as line-structured JSON (jsonl). voila, far simpler to parse and understand for a humble web developer
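the sort-without-decoding idea above comes down to something like this. a minimal sketch under my own naming (the `IdIndexEntry` struct and field layout are illustrative, not realy's actual index format):

```go
package main

import (
	"encoding/json"
	"fmt"
	"sort"
)

// IdIndexEntry pairs an event ID with its created_at timestamp, so a
// result set can be ordered without ever decoding the full event data.
type IdIndexEntry struct {
	ID        string `json:"id"`
	CreatedAt int64  `json:"created_at"`
}

// sortByCreatedAt orders entries newest-first (descending) or oldest-first.
func sortByCreatedAt(entries []IdIndexEntry, descending bool) {
	sort.Slice(entries, func(i, j int) bool {
		if descending {
			return entries[i].CreatedAt > entries[j].CreatedAt
		}
		return entries[i].CreatedAt < entries[j].CreatedAt
	})
}

func main() {
	entries := []IdIndexEntry{
		{"aaa", 1700000100},
		{"bbb", 1700000300},
		{"ccc", 1700000200},
	}
	sortByCreatedAt(entries, true)
	// Emit one JSON object per line (jsonl), as the /filter
	// endpoint would stream its ID list back to the client.
	for _, e := range entries {
		b, _ := json.Marshal(e)
		fmt.Println(string(b))
	}
}
```

the client can then segment that ID list and fetch only the events it actually wants, which is where the bandwidth saving comes from.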

yay, i'm gonna call it my #GA message. finished my hours in the pit of the fiat mine today, and this afternoon i almost completed a new index for the database, plus generating it from the records already in the database. i will be completing the simplified filter endpoint for #realy; maybe i'll even manage to feed the output of a filter into the events endpoint and see the full HTTP flow of a simple filter query followed by retrieving the events.

after talking with nostr:npub176p7sup477k5738qhxx0hk2n0cty2k5je5uvalzvkvwmw4tltmeqw7vgup a little, i decided i need to add an element to the event ID index i am creating that stores the timestamp with it, so it can be used to sort the result keys and send the list of event IDs back in either ascending or descending created_at order. there is also a limit parameter, which can be used with the last timestamp (smallest or largest, depending on sort order) to actually fetch pages... still no guarantee a page is going to be the same actual result as the first time, but anyway. the event ID index is required to enable finding event IDs, and the only criterion that matters for this search is the timestamp, so it will have this now.

ok, off to werk then. just remember: a hobby is just work that nobody cares about, recreation is mostly just sleep and ablutions, so what you do in your downtime is just work that you value. if one wants to transition to a new field of work, such as i do, to go all in on nostr dev, then i have to start by doing nostr dev as my hobby, and the great amount of validation i received in response to pimping my crowdfund page for it on #geyser was very encouraging that i may be close to finding a legitimate business use case that has money to keep paying me to do specific things with it and let me quit working on shitcoin social networks
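the limit-plus-last-timestamp pagination described above is keyset pagination. a minimal sketch, assuming a descending-sorted index; the `entry` type and `page` helper are hypothetical names, not realy's code:

```go
package main

import "fmt"

// entry is a (timestamp, id) pair as the index would deliver it,
// already sorted newest-first.
type entry struct {
	id string
	ts int64
}

// page returns up to limit entries whose timestamp is strictly before
// the cursor. Passing the last (smallest) timestamp of the previous
// page as the cursor fetches the next page.
func page(sorted []entry, before int64, limit int) []entry {
	var out []entry
	for _, e := range sorted {
		if e.ts >= before {
			continue
		}
		out = append(out, e)
		if len(out) == limit {
			break
		}
	}
	return out
}

func main() {
	all := []entry{{"e4", 400}, {"e3", 300}, {"e2", 200}, {"e1", 100}}
	p1 := page(all, 1<<62, 2) // huge cursor = first page: e4, e3
	fmt.Println(p1)
	p2 := page(all, p1[len(p1)-1].ts, 2) // next page: e2, e1
	fmt.Println(p2)
}
```

as noted, there is no guarantee the second page matches what it would have been at the time of the first query; new events can land in between, and the client decides whether to refresh its stored result state.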
