#realy #devstr #progressreport
i have now fully separated the websocket handling from the main relay code
the relay only concerns itself with saving events, and by the magic of interfaces, the parts of the server needed by both the socket api and the http api are now passed into their packages without creating a circular dependency, making the points of contact very clear and cleanly separating the two
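something like this pattern, as a minimal sketch; the names here (`Store`, `socketapi`) are illustrative placeholders, not realy's actual identifiers:

```go
// hypothetical sketch of the interface-injection pattern described above;
// names are illustrative, not the actual realy identifiers.
package socketapi

import "context"

// Store is the narrow view of the relay that the socket API needs.
// The relay package implements it, so this package never imports the relay.
type Store interface {
	SaveEvent(ctx context.Context, ev []byte) error
	QueryEvents(ctx context.Context, filter []byte) ([][]byte, error)
}

// Handler holds only the injected dependency.
type Handler struct {
	store Store
}

// New receives the relay through the interface at construction time.
func New(store Store) *Handler { return &Handler{store: store} }
```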
the last step, which will require a version bump, is to make a bunch of build tags and some stubs that noop whichever of the two apis is not included in the build, so there will be the possibility of deploying either the legacy, hybrid, or new API without cluttering your server's memory and disk space with the parts you aren't using
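roughly what a noop stub behind a build tag could look like, sketched with hypothetical names and a made-up tag, not the actual realy layout:

```go
//go:build !socketapi

// hypothetical no-op stub for when the socket api is compiled out; the real
// implementation sits behind the opposite tag and exports the same names
// (Store itself would live in an untagged file so both variants compile)
package socketapi

import "net/http"

type Handler struct{}

// New ignores the injected store and allocates nothing of substance.
func New(_ Store) *Handler { return &Handler{} }

// ServeHTTP refuses requests when the socket api is not included in the build.
func (*Handler) ServeHTTP(w http.ResponseWriter, _ *http.Request) {
	http.Error(w, "socket api not included in this build", http.StatusNotImplemented)
}
```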
well, there is still going to be a map and mutex for the socket api that is shared by each instance created to serve a socket client, but it will only be allocated, not populated, if the socket api is not running; it has to live at the relay level because each socket shares this state in order to delete itself from the list, for the subscription management
so, probably not really that close to finishing the task; really the subscription processing should also be forked out into the socket api and http api for each of their relevant parts, and then it will be fully separated and isolated
i think i will need to push the actual part where the subscription is sent fully into each of the apis, but the part that maintains the list of subscriptions, like the client list, kinda has to be common, as the relay itself is a web server and either sends a request to be upgraded for socket, or forwards the request for http
all i know is i think i need to go eat some food
#realy #devstr #progressreport
not really a big thing, just been busy refactoring the HTTP API so that it's not tightly coupled with the relay server library, separating the websocket stuff from the http stuff
now at v1.14.0 to signify that it's been refactored; i unified the huma API registration into a single operations type, which means i don't have to touch the server code to add new http methods
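the general shape of that idea, as a hedged sketch that doesn't use the real huma or realy identifiers, just the self-registration pattern:

```go
// illustrative sketch of the "single operations type" idea: each endpoint
// registers itself in one slice, so adding a method never touches server code.
// names here are hypothetical, not the actual realy/huma identifiers.
package operations

import "net/http"

type Operation struct {
	Method, Path string
	Handler      http.HandlerFunc
}

var registry []Operation

// Register is called from each endpoint's init() or constructor.
func Register(op Operation) { registry = append(registry, op) }

// Mount wires every registered operation onto the router in one place
// (uses Go 1.22+ method-qualified patterns).
func Mount(mux *http.ServeMux) {
	for _, op := range registry {
		mux.HandleFunc(op.Method+" "+op.Path, op.Handler)
	}
}
```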
also in other news, i have found that i can't get more than one more year of realy.lol, so i've migrated the repository to refer to https://realy.mleku.dev and set up the vanity redirect for the new name; the old code will still find its actual hosting on github just fine, but once you have it and check out the latest on the `dev` branch it will all point to https://realy.mleku.dev
same name, different DNS... and i've extended the mleku.dev, which i also use for my email and some general things, to expire in 2029 so i won't have to think about that for some time
now, not sure what i should do next; i just wanted to make the http api more tidy, and making an interface and popping all the http methods into one registration makes things a lot neater
ah yes, i need to start building a test rig for this thing, and probably it will be much like mikedilger's thing but will also test the http API
so, back to the fiat mine i guess... the project i'm working on is getting close to MVP, practically there, just a bit of fighting with the openapi client code generator for new methods that are needed for the spider's efficient fetching of updated data
#realy #devstr #progressreport
i just got done adding a feature for version v1.12.0 - a live IP blocklist
it's actually a whole configuration system, but i just wanted specifically a block list that didn't require me to add more nonsense to the configuration, but also that let me explore uses for the HTTP API that you can now see at https://mleku.realy.lol/api
blocklist is just the first item; quite possibly a lot of other parts of the configuration will go into there, if they are settings that can be instantly active and don't change things in an irreversible way (unlike, say, the database configuration for event storage encoding, or similar things)
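a rough idea of what a live blocklist like this can look like, with hypothetical names and a simple admin-updatable map:

```go
// rough sketch of a live, mutex-protected IP blocklist that an admin endpoint
// can update at runtime; names and endpoints are hypothetical.
package blocklist

import (
	"net"
	"net/http"
	"sync"
)

type List struct {
	mu      sync.RWMutex
	blocked map[string]struct{}
}

func New() *List { return &List{blocked: make(map[string]struct{})} }

// Set replaces the whole list, e.g. from a POST to the admin HTTP API.
func (l *List) Set(ips []string) {
	m := make(map[string]struct{}, len(ips))
	for _, ip := range ips {
		m[ip] = struct{}{}
	}
	l.mu.Lock()
	l.blocked = m
	l.mu.Unlock()
}

// Middleware drops requests from blocked addresses before any relay work happens.
func (l *List) Middleware(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		host, _, err := net.SplitHostPort(r.RemoteAddr)
		if err != nil {
			host = r.RemoteAddr
		}
		l.mu.RLock()
		_, bad := l.blocked[host]
		l.mu.RUnlock()
		if bad {
			http.Error(w, "blocked", http.StatusForbidden)
			return
		}
		next.ServeHTTP(w, r)
	})
}
```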
there will be more of these kinds of configuration elements added; for now, it is nice for the use case of blocking spiders that make pointless, repetitive requests and waste my VPS CPU, memory and bandwidth
#realy #devstr #progressreport
not a big report today, though i've been busy doing things
the http api now has import/export for admin stuff, and i just added a new feature to rescan the events and update their indexes, so when i add new index types, they are generated for all events again
the reason i did that is that i have added a new index to enable searches that just return the event ID
but i want to be able to avoid unmarshaling the events to get their IDs, so i made a new index that has the event ID
then i realised the pubkey needs to be in there too, so results can be screened for muted authors and not sent pointlessly, and IMO disrespectfully, to users
i hadn't finished designing this index, but even if i do modify it, regenerating it from the pre-pubkey version enables the new index anyway; indexes are kinda nice like this, the data is neutral, so if you change the logic and it needs new data in the indexes, you can just change them; they are internal and private, but they need to be regenerated when new indexes are added
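a sketch of what such an index key could look like; the prefix byte, field order and names are just illustrative assumptions here, not realy's actual encoding:

```go
// hypothetical layout for the new index described above: a fixed-size key that
// carries the event id and pubkey so muted authors can be screened without
// unmarshaling the event.
package index

const idPubkeyPrefix = 0x1d // illustrative prefix, not realy's actual value

// IdPubkeyKey packs prefix | 32-byte event id | 32-byte pubkey.
func IdPubkeyKey(id, pubkey [32]byte) []byte {
	k := make([]byte, 0, 1+32+32)
	k = append(k, idPubkeyPrefix)
	k = append(k, id[:]...)
	k = append(k, pubkey[:]...)
	return k
}

// regenerating indexes is just a rescan: decode each stored event once and
// re-emit every index key, so adding a field later is a one-off batch job.
```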
anyway, this is all in aid of implementing an http filter that is basically like a regular filter but with the search and ids fields ripped out, because ids are a separate API, which i have to implement too, actually, but there is an order of operations in these things
first you encode, then you decode
first you get a list of event ids
then you write the code to get the events for them back out
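so the filter type for this could look roughly like the following, assuming the usual nip-01 fields minus the two that get their own endpoints; field names are illustrative:

```go
// sketch of the "http filter" shape described above.
package httpapi

type Filter struct {
	Kinds   []int               `json:"kinds,omitempty"`
	Authors []string            `json:"authors,omitempty"`
	Tags    map[string][]string `json:"tags,omitempty"`
	Since   int64               `json:"since,omitempty"`
	Until   int64               `json:"until,omitempty"`
	Limit   int                 `json:"limit,omitempty"`
	// no Ids field: fetch-by-id is its own endpoint
	// no Search field: full text search is its own endpoint
}
```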
almost there
maybe i can finish this filter tonight
#devstr #realy #progressreport
i have been in the process of building the new HTTP protocol, but i wanted to first actually upgrade a production system to whatever new code i have got running, to sorta make sure that it's really working with a reasonable amount of actual users and spiders making connections to it
well, anyway, the purpose was pretty small, mainly this:
there is now a new index, which consists of an event ID and the event's created_at timestamp
for now, the production system has just made these for every event it has, and will generate them for every new event that comes in
but the reason for it was so that as soon as i update to the full finished MVP implementation of the protocols, that the necessary indexes are already in place
i have basically already implemented the fetch by ID endpoint and the event publish via http, the last piece is the http `/filter` endpoint, which basically provides for doing a search based on kinds, authors and tags.
the "search" field is a separate thing anyway, and is intended for full text indexes and ... well, DVMs, which are basically what i'm superseding btw
these return only the event IDs, and to enable that i needed to create a new index that stores the event ID and created_at timestamp, so i can find the event by index, then use the index to find the FullIdIndex entry, and from that assemble a list and sort it either ascending or descending based on the timestamp in the index
without having to decode the event data; that's important, because decoding is an expensive operation when those two fields are all i need to get the result
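a minimal sketch of that id-plus-timestamp index and the decode-free sort it allows; the key layout and names are assumptions, not realy's actual encoding:

```go
// hypothetical id|created_at index entry and the sort it enables without ever
// touching the event JSON.
package index

import (
	"encoding/binary"
	"sort"
)

type IdStamp struct {
	Id        [32]byte
	CreatedAt int64
}

// IdCreatedAtKey packs prefix | 32-byte id | 8-byte big-endian timestamp.
func IdCreatedAtKey(prefix byte, s IdStamp) []byte {
	k := make([]byte, 1+32+8)
	k[0] = prefix
	copy(k[1:33], s.Id[:])
	binary.BigEndian.PutUint64(k[33:], uint64(s.CreatedAt))
	return k
}

// SortByTime orders results by the timestamp carried in the index entries.
func SortByTime(results []IdStamp, descending bool) {
	sort.Slice(results, func(i, j int) bool {
		if descending {
			return results[i].CreatedAt > results[j].CreatedAt
		}
		return results[i].CreatedAt < results[j].CreatedAt
	})
}
```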
and then the caller knows that, at the moment the results were delivered, the list is correct for the relay's state, and it can then segment that list, if necessary, and request the individual events it actually needs, which is a big bandwidth saving as well as enabling simpler pagination by shifting the query state to the client
of course, clients can decide to update that state, and because they already have the same query's results, if they store them, they can even also see if new events popped up in between, as chaos tends to allow (clock skew and network latency), but the client doesn't have to have those events thrown at it immediately as is the case with nostr standard nip-01 EVENT envelope responses on the websocket
now, they can literally just ask for a set of event IDs, and have them spewed back as line structured JSON (jsonl) and voila
far simpler to parse and understand for a humble web developer
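roughly what the client side of that looks like; the `/events` path here is an assumption based on the description above, not the documented endpoint:

```go
// minimal sketch of the jsonl consumption described above: post a list of ids,
// read one event per line.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"net/http"
	"strings"
)

func main() {
	ids := strings.NewReader(`["<event id hex>","<event id hex>"]`)
	resp, err := http.Post("https://relay.example.com/events", "application/json", ids)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	sc := bufio.NewScanner(resp.Body)
	sc.Buffer(make([]byte, 0, 1<<20), 1<<20) // events can be large lines
	var events []json.RawMessage
	for sc.Scan() {
		// copy each line, since the scanner reuses its buffer
		events = append(events, json.RawMessage(append([]byte(nil), sc.Bytes()...)))
	}
	fmt.Println("received", len(events), "events")
}
```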
#devstr #progressreport i made some fixes to #realy today
instead of blocking clients asking for DMs when they're not authed, it sends an AUTH challenge and just doesn't return them until it gets auth
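the gist of that decision, as an illustrative sketch; the types are placeholders and the kind numbers are just the usual privileged ones (nip-04 DMs, gift wraps, application-specific data):

```go
// rough sketch of the policy change: instead of rejecting a REQ for privileged
// kinds from an unauthenticated socket, send AUTH and hold the results.
package policy

var privileged = map[int]bool{4: true, 1059: true, 30078: true}

type Conn interface {
	Authed() bool
	SendAuthChallenge()
}

// ShouldWithhold reports whether results must be held back until the client auths.
func ShouldWithhold(c Conn, kinds []int) bool {
	if c.Authed() {
		return false
	}
	for _, k := range kinds {
		if privileged[k] {
			c.SendAuthChallenge() // the nip-42 AUTH envelope goes out here
			return true
		}
	}
	return false
}
```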
in many cases, clients do that after i post a new note, because the nip-11 says "restricted writes" and that makes them actually auth, and once the socket is authed i can read my messages... not that i use them, because they don't work; nostr DMs are a joke because nostr sockets are a dumb idea, hardly any client implements auth properly still, and the specification is as transparent as the lower atmosphere of jupiter.
anyway, so, there seems to be some improvement; jumble is now happily posting and fetching notes properly again after it was broken for a bit of today
i'm feeling quite exhaustage also
on other fronts, i am close to finishing the JWT token verification; i am just gonna say that the #golang #JWT library is horribly documented and i basically had to poke at several things before i could even read a clear statement that i needed some function to return a goddamn key
yes, that will be the one that fetches the 13004 event if it finds it, and decodes the pubkey and then uses that
just writing the test at this point, using the key directly from the generation step
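for reference, this is the keyfunc pattern in question, assuming the github.com/golang-jwt/jwt/v5 library (which library realy actually uses isn't stated here), with an HMAC key standing in for whatever the kind-13004 lookup resolves to:

```go
// sketch of the keyfunc pattern: parse needs a callback that hands back the
// verification key, which is the part that was so hard to find documented.
package main

import (
	"fmt"
	"time"

	"github.com/golang-jwt/jwt/v5"
)

func main() {
	key := []byte("test key straight from the generation step")

	// issue a token, as the test described above would
	tok := jwt.NewWithClaims(jwt.SigningMethodHS256, jwt.MapClaims{
		"sub": "npub...",
		"exp": time.Now().Add(time.Hour).Unix(),
	})
	signed, err := tok.SignedString(key)
	if err != nil {
		panic(err)
	}

	// verification: the keyfunc is "the function that returns the goddamn key"
	parsed, err := jwt.Parse(signed, func(t *jwt.Token) (any, error) {
		if _, ok := t.Method.(*jwt.SigningMethodHMAC); !ok {
			return nil, fmt.Errorf("unexpected signing method %v", t.Header["alg"])
		}
		return key, nil // here realy would return the key resolved from the 13004 event
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("token valid:", parsed.Valid)
}
```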
compared to the btcec/decred bip-340 library, the API of the conventional "blessed" cryptographic signature algorithms is a pile of dogshit covered in shredded car tires that has been set on fire
you think there could be anything more stinky than a tire fire? yes, a dogshit-and-tire-fire... that is the state of "official" cryptography, compared to bitcoiner cryptography, which is sleek and svelte and sane
no, i'm just kidding, bitcoiner cryptography except the C library is abominable, because roasbeef is a jerk who has repeatedly stopped me contributing to either btcd or lnd because he is a smartarse jerk and you know he's a jerk because he isn't here to ask me for an apology
fuckin assclown
idk how he came to have anything to do with the invention of the lightning protocol but i'm sure as fuck certain it was only that he built LND and nothing more, because his code is awful, beyond awful, and i hate him also because he doesn't seem to care that his repos don't have working configuration systems
anyhow
probably i will finish making the JWT auth scheme validation code tomorrow and then wrap it into the http library, and voila, normie clients with a token will be able to access auth-required relays without either websockets or bip-340 signatures; instead, a revocable bearer token scheme that lets another key stand in for the auth purposes
also, did i mention that roasbeef is a dick?
i would love to have one of his fans rebuff me, that would make me feel like i actually had an interaction with him that wasn't him stopping me doing useful things for bitcoin
#catstr #mochi #progressreport
the gingivitis is not easy to heal... teeth get nasty abscesses under them that take time to squish out with massage
the bbq beef is not so great at this when it has too much char, as the charred amino acids contain a lot of glutamate, which makes the nerves tweak, and this triggers release of inflammatory cytokines
so i'm going to quit with the char bbq; it's too complicated to do anyway without a ready source of ample wood, so i'm just gonna go with gas and frying stuff in butter. what i get out of it is kinda cool, but i also have pain issues from the glutamates from excess char that make my bottom right rear broken molar flare up and hurt and protrude somewhat, and yes it makes me drool especially when sleeping, so it's basically the same thing
in general, he is starting to show less yellow stain in his feet and back legs in general, still a bit stained, but it's reducing, and i think it's because he's recovering from the sinus infection that is the source of the yellow mucus
so, he's still tongue out, and drooling, but i think he's getting better
he ate a big two servings of 100g pressure cooked mushy meat preserves this morning, but this afternoon i have it in my mind to cook up some more shellfish in butter and share some of it with him, but try to cook it a bit longer so it dries out a bit and is goodly chewy
#realy #devstr #progressreport #gn
not a big major thing, just signing off with what i have got done so far...
there is now a little registry that stores a mutex protected map of capabilities, the scaffold of the HTTP handlers, and for each handler a registration init function that currently is empty but will be populated to convey the necessary information about how a simplified nostr API call can be used and what it requires or other optional things
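something like this, sketched with made-up names, just to show the shape of the registry:

```go
// rough sketch of the little registry described above; names are illustrative.
// each handler contributes a Capability describing what its endpoint needs.
package capabilities

import "sync"

type Capability struct {
	Path         string
	RequiresAuth bool
	Notes        string // e.g. "auth required for kinds 4, 1059, 30078"
}

var (
	mu       sync.Mutex
	registry = map[string]Capability{}
)

// Register is called from each handler's registration init function.
func Register(c Capability) {
	mu.Lock()
	defer mu.Unlock()
	registry[c.Path] = c
}

// List snapshots the registry, e.g. to serve a /capabilities endpoint.
func List() []Capability {
	mu.Lock()
	defer mu.Unlock()
	out := make([]Capability, 0, len(registry))
	for _, c := range registry {
		out = append(out, c)
	}
	return out
}
```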
i will have to write some new API for the database to implement these, as, for many queries, it will not be necessary to decode the event immediately if it doesn't require a second-level match in addition to the base index match generated by the filter, since the filter/fulltext query types only return lists of event IDs instead of directly returning the events
the purpose of this is to enable pagination without the relay needing to be made more complex with a query state cache because dynamic data has to be either updated to match new events that come in, or the results of a request have to be cached for a period of time to enable clients to selectively request segments of a result set
this way it just throws that back to the client, and they can then formulate event requests that contain the segments of the result they received in order to load the data for display. in fact it enables the client to literally preload ahead of the user's scrolling, one by one as a new item is revealed; it can fetch the next one well before the user gets to it, in accordance with the scrolling motion even
not stuff that pagination even helps you do, though it doesn't really apply to desktop/web apps so much, as they don't have the concept of "flick" scrolling
anyhow, back to the fiat mine tomorrow, building a different database and index scheme and API, but for sure i will be fleshing out these methods as i go
it will be possible, once i implement these, to add the ability to make these queries using the `curdl` curl-like tool that does nip-98 HTTP auth, sending it queries with the relevant Accept header data, RESTful path and POST body containing the JSON of the request
yes, this will be a bit like what `nak` does, except without the websockets
well, the subscribe could probably do it *with* the websockets as well, but that would require me to extend curdl to be able to print the received event ID hashes as JSONL, which could of course be part of some shell script or fed into some other program
in fact it does kinda make nak redundant as far as programmability for shell scripting, since the APIs on the relay are simply paths and json in the http request body, for most of the things anyway, not all; like, i'm not gonna implement bunkers lol
anyway i'm kinda betting on the idea that once there is a mostly HTTP way to access nostr events that this design will proliferate due to its simplicity
#realy #devstr #progressreport
just been tidying up in preparation for implementing the simplified nostr protocol spec https://github.com/mleku/realy?tab=readme-ov-file#simplified-nostr
found a few things that were not properly dealt with, mainly related to the new public readable option, and the request filtering
the important change now in v1.9.5 (current at this time) is that it now correctly refuses to process queries for DMs and Application Specific Data and related privileged event kinds with no auth
there is already one working path endpoint, `/relayinfo` which i haven't documented yet, it's just the same as nip-11, but specific to the new protocol, because it's on a path
one key thing that will be happening even with subscription based websockets APIs is that they will do auth using nip-98 at the request level, and the `/capabilities` endpoint will specify whether an API requires auth or not, or finer details like "for kinds a, b, c", as it will for the filter endpoint on privileged events as i was talking about just above
ok, getting to it
there will be a full, sane http RESTful API put together soon, which will be very nice for resource constrained devices that can't do websockets, mainly this is going to mean #alexandria can be deployed to read books on an ebook reader
but i think that general adoption of this protocol extension should take place anyway, because it so greatly simplifies implementation for the majority of types of queries and publication requests that clients use, meaning nostr-noob devs will be able to get something working sooner
well, at least for #realy anyway y'alls other peoples gonna have to catch up at that point
#realy v1.8.2 just dropped
the main new feature is that the admin HTTP ports now use nip-98 authentication instead of HTTP-Basic, and there is a new command in `cmd/curdl` that expects your prescribed nsec in the NOSTR_SECRET_KEY environment variable; it can do `get` for commands that return data (like export) and `post` lets you push up a `jsonl` file to dump a load of events into the database
took me a while to figure out how to get the upload working, some glitch with http and indefinite file sizes so i just discarded the ability to feed it events from localhost
there are ways to do indefinite sized uploads via http but they are complicated, and they are simple to do via websockets, so if i feel the need to enable piping stuff up to realy, it will require a new route that upgrades to websocket to do this, most probably... or http/2 or something, idgaf, it isn't what i set it up to do; i just wanted working import/export with nostr native auth for all admin
the `get` mode lets you issue commands via http path and parameters, nothing special has been implemented yet, no fancy swagger openapi bullcrap, and probably never will be, but it has proper security for authentication of the admin remote access of the features it already has
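for the curious, the nip-98 side of a tool like curdl boils down to roughly this; `signEvent` is a placeholder for the actual bip-340 signing, and the export path is just an example, not the documented route:

```go
// sketch of assembling a nip-98 authed request: build a kind 27235 event with
// "u" and "method" tags, sign it, base64 it into the Authorization header.
package main

import (
	"encoding/base64"
	"encoding/json"
	"net/http"
	"time"
)

type event struct {
	ID        string     `json:"id"`
	Pubkey    string     `json:"pubkey"`
	CreatedAt int64      `json:"created_at"`
	Kind      int        `json:"kind"`
	Tags      [][]string `json:"tags"`
	Content   string     `json:"content"`
	Sig       string     `json:"sig"`
}

// signEvent is a stand-in: fill in id, pubkey and sig from NOSTR_SECRET_KEY.
func signEvent(ev *event) error { return nil }

func nip98Request(method, url string) (*http.Request, error) {
	ev := &event{
		CreatedAt: time.Now().Unix(),
		Kind:      27235,
		Tags:      [][]string{{"u", url}, {"method", method}},
	}
	if err := signEvent(ev); err != nil {
		return nil, err
	}
	raw, err := json.Marshal(ev)
	if err != nil {
		return nil, err
	}
	req, err := http.NewRequest(method, url, nil)
	if err != nil {
		return nil, err
	}
	req.Header.Set("Authorization", "Nostr "+base64.StdEncoding.EncodeToString(raw))
	return req, nil
}

func main() {
	req, _ := nip98Request(http.MethodGet, "https://relay.example.com/export")
	_ = req // hand this to an http.Client in the real tool
}
```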
now i can move to doing the http API i dreamed up which i will be collaborating with some client devs in the future to integrate, because for simple queries and uploads, http is far less code and bullshit than having to also deal with sockets, and it makes it really easy for any standard web dev to build an app that talks to this API because it's not some kooky hybrid chimera big ball of mud
#devstr #progressreport
i totally need a break after this morning's session
#realy #devstr #progressreport
nip-40, expiration timestamps, is now implemented for realy
it is a simple opportunistic delete that occurs the first time after the expiration passes that the event is requested
an active expiration garbage collector is a far more complex thing to build; there is a basic GC in the codebase, but it is not worth the time to implement this without a pressing need in its deployment
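the opportunistic version is basically this simple, sketched with placeholder types:

```go
// minimal sketch of the opportunistic expiration described above: when an
// event with an "expiration" tag is pulled for a query after its deadline,
// drop it from the result and queue it for deletion.
package expiry

import (
	"strconv"
	"time"
)

type Event struct {
	Tags [][]string
}

// expiresAt returns the nip-40 expiration timestamp, or 0 if none is set.
func expiresAt(ev *Event) int64 {
	for _, t := range ev.Tags {
		if len(t) >= 2 && t[0] == "expiration" {
			if ts, err := strconv.ParseInt(t[1], 10, 64); err == nil {
				return ts
			}
		}
	}
	return 0
}

// Filter removes expired events from a query result and reports which ones
// should now be deleted from storage.
func Filter(events []*Event, now time.Time) (keep, expired []*Event) {
	for _, ev := range events {
		if ts := expiresAt(ev); ts != 0 && ts <= now.Unix() {
			expired = append(expired, ev)
			continue
		}
		keep = append(keep, ev)
	}
	return keep, expired
}
```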
it also now can be enabled to be public readable, which means it returns results for queries without auth; though i think it usually asks for auth anyway, if public readable is enabled it will just process queries...
there is a necessary request policy change too: filters that only contain time boundaries and no kinds, authors, tags or ids are now ignored unless the user is authed, and only the directly whitelisted users (the owner's follows) get fully non-ratelimited access to all queries; the non-direct (guest) users, follows of follows, have a rate limit on requests that can burst at 5 in a second and thereafter limits to 1 req per second, which should be sufficient for properly constructed queries for public documents such as what #alexandria will need
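that limit maps pretty directly onto golang.org/x/time/rate; a sketch, with the per-pubkey map being my own illustrative framing rather than realy's actual code:

```go
// sketch of the guest rate limit described above: a burst of 5, then 1 request
// per second, tracked per pubkey.
package limits

import (
	"sync"

	"golang.org/x/time/rate"
)

type Guests struct {
	mu       sync.Mutex
	limiters map[string]*rate.Limiter // keyed by pubkey
}

func NewGuests() *Guests { return &Guests{limiters: make(map[string]*rate.Limiter)} }

// Allow returns false when a follows-of-follows user has exceeded 1 req/s
// after an initial burst of 5.
func (g *Guests) Allow(pubkey string) bool {
	g.mu.Lock()
	l, ok := g.limiters[pubkey]
	if !ok {
		l = rate.NewLimiter(rate.Limit(1), 5)
		g.limiters[pubkey] = l
	}
	g.mu.Unlock()
	return l.Allow()
}
```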
#realy #progressreport
been doing a bunch of debugging using nostr:npub1acg6thl5psv62405rljzkj8spesceyfz2c32udakc2ak0dmvfeyse9p35c 's https://github.com/mikedilger/relay-tester
so, i finally got the issue of inclusive since and until fixed on realy, and probably it has fixed a heap of specifically structured filters that didn't get results before; the problem was that the absence of "until" in the filter meant it was defaulting to 0, which meant it wasn't matching anything, because until is a "less than" operation and nobody has events with negative timestamps (they are invalid)
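the fix amounts to something like this (names are illustrative, not the actual realy query code):

```go
// sketch of the fix described above: when a filter omits "until", default it to
// the maximum timestamp instead of zero so the upper-bound comparison matches.
package query

import "math"

type Range struct {
	Since int64
	Until int64
}

// Normalize makes since/until inclusive and fills sane defaults.
func Normalize(since, until int64) Range {
	if until == 0 {
		until = math.MaxInt64 // absent "until" means "no upper bound", not 0
	}
	return Range{Since: since, Until: until}
}

// Match checks a created_at against the inclusive bounds.
func (r Range) Match(createdAt int64) bool {
	return createdAt >= r.Since && createdAt <= r.Until
}
```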
it also is mostly fixed for a bunch of other things; delete by a-tags now is properly working, but i am pretty sure there is a test right at the end of the relay-tester that tries to save back an old version of a parameterized replaceable event, and the assumption he's making is that replacement isn't the same as a delete operation; in my implementation, the deleted replaced event has a tombstone so it can't be saved again, even if you delete the latest version, which would normally block it because of the timestamp being older for the same a tag (kind:npub:dtag)
there have been quite some upgrades in several of the dependencies of realy that i think have led to a massive performance improvement; it is definitely way faster than it was before, and i hadn't optimized anything. it's most noticeable in the performance of the database, which has now gone from fast to instant; i can import a stack of over 100k events from localhost now and it swallows it, then regenerates the access control list in under 3 seconds
possibly there is a heap of other things from the golang.org/x libraries that are used in various places that have got hella faster as well
still a bunch of behaviours i can't account for, like not properly showing relay-based events for relay info pages and notes lists on coracle, though it's sorta working on nostrudel now; i probably should try to dig into this, it's probably some similar error of semantics in the filter conversion to a set of query indexes, like the one i made with the until field
time to do some other thing though; v1.6.1 realy is available now and is definitely better than past versions... and i also made some fixes to nudge go compilers to update to the new versions after mangling all the old version tags... so if you are having issues, just delete and clone it again, should be all good on dev branch now