#realy #devstr #progressreport
after some fighting to get the huma api to play nice with my easy-registering router - mainly it was just realising that the huma servemux should match on path prefixes, not the whole path - i finally had the HTTP API presenting
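the gist of the fix, as a sketch (hypothetical names, not the actual realy router): dispatch on the longest registered prefix instead of requiring an exact path match, so the huma servemux can own everything under its mount point

```go
package router

import (
	"net/http"
	"strings"
	"sync"
)

// Router dispatches requests by path prefix rather than exact match,
// so a sub-mux like huma's can claim a whole subtree
type Router struct {
	mu     sync.RWMutex
	routes map[string]http.Handler // prefix -> handler
}

func New() *Router { return &Router{routes: make(map[string]http.Handler)} }

// Register claims everything under the given path prefix
func (r *Router) Register(prefix string, h http.Handler) {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.routes[prefix] = h
}

// ServeHTTP picks the longest matching prefix, so more specific
// registrations win over more general ones
func (r *Router) ServeHTTP(w http.ResponseWriter, req *http.Request) {
	r.mu.RLock()
	defer r.mu.RUnlock()
	var best string
	for prefix := range r.routes {
		if strings.HasPrefix(req.URL.Path, prefix) && len(prefix) > len(best) {
			best = prefix
		}
	}
	if best == "" {
		http.NotFound(w, req)
		return
	}
	r.routes[best].ServeHTTP(w, req)
}
```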
first things first, i wanted to move more stuff into dynamic configuration stored in the database, and the first thing to do is enable configuration of the admins in order to restrict access to the admin parts of the API
i haven't fully finished it yet but it starts up wide open now, and you have to use the /configuration/set endpoint and drop at least one npub into the Admins field, and voila, it is now locked down
i have to first add just one configuration item to the environment: the initial password, which you put in the Authorization header field to allow access. this ensures that the relay is not open to the world should it be deployed on some far distant VPS that can be spidered by nasty people who might quickly figure out how to break it on you
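for illustration, the bootstrap flow looks roughly like this from the client side (the payload shape and field names are made up; only the /configuration/set endpoint, the Admins field and the Authorization header usage come from the above):

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

func main() {
	// hypothetical payload: at least one admin npub to lock it down
	body := []byte(`{"admins": ["npub1..."]}`)
	req, err := http.NewRequest(http.MethodPost,
		"https://relay.example.com/configuration/set", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	// the initial password from the relay's environment
	req.Header.Set("Authorization", "initial-password-from-env")
	req.Header.Set("Content-Type", "application/json")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status) // once this succeeds, the relay is locked down
}
```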
once those two pieces are in place, i need to put back the nip-98 expiration variant generator tool, so you can use that token temporarily to auth as an administrator and tinker with the other admin functions - but mainly the configuration is the most important priority
so, a nice afternoon's work, dragging a bit into the evening, but i got my nice router library working with the huma API, and based on the code from the original realy i will reinstate the whole of its functionality pretty quickly... likely along the way i will find something to make a bit better, but i think overall it's fine as it is. it's just a bit clunky to use the export function in the scalar REST docs UI, but with my nip-98 capable curl tool, nurl, you can just use that and basta
now it's time for bed tho
#gn
#realy #devstr #progressreport
today mostly just finishing up polishing the rewrite, but this evening i wrote myself a nice HTTP path/header router and completed the publisher router
i'm aiming to make this thing as lean and modular as possible, so now i can make packages that do http handling (eg the nostr nip-01 websocket API, or the relay information page, or whatever) which register themselves, and can optionally be excluded by using a build tag on the source file containing the implementation
most likely i will roll that all into one base directory, with a single file and build tag per implementation that pulls in the given type and registers it, and then the main relay will only call the router, which only sees the packages that weren't excluded from the build
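the pattern, sketched with hypothetical names (the registry package and Handler type here are illustrative):

```go
//go:build !no_websocket

// this file is the single point that pulls in and registers the
// websocket api; building with -tags no_websocket removes it entirely
package websocketapi

import "realy.mleku.dev/registry" // hypothetical registry package

// Handler stands in for the actual api implementation
type Handler struct{}

func init() {
	// self-registration at startup; the relay only ever iterates
	// what the registry actually collected
	registry.Register("websocket", &Handler{})
}
```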
learning a lot more about how to write http stuff in #golang - man, http routers are unbelievably retardedly simple, i just wrote a router, HBU? lol
no idea how much cruft i'm gonna be able to shred from the main realy codebase just yet, but this router i just made shreds a big chunk of spaghetti and lets me isolate the parts cleanly... APIs will have their own handlers and their own publisher, all linked to a main publisher, and if an api is disabled, it just isn't used, and voila: legacy nostr, hybrid legacy/http, or new-wave http
i also don't know how long this is going to take but it feels really good just sitting quietly and calmly and building this
or maybe this is #ragecoding
because when you are rage coding, actually you are in such a state of mind that you could cut shit like your eyes are frickin lasers
#realy #devstr #progressreport
i finally did it, i abstracted the handler for publishing to subscribers... it was really tricky to get straight in my mind how to do it, but i eventually figured out that i needed to make a "Type" method for both the message types and the handler (i just embed it into the publisher implementations, and then the third interface, the abstract one, uses it to identify the type of the message and match it to the type of the publisher), and voila
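the shape of it, as a minimal sketch (hypothetical names, not the actual realy interfaces):

```go
package publish

// Typer is the shared "Type" method embedded in both the messages
// and the publisher implementations
type Typer interface {
	Type() string
}

type Message interface {
	Typer
}

type Publisher interface {
	Typer
	Deliver(msg Message)
}

// Publishers is the abstract publisher: it matches each message to
// the publisher whose type agrees with the message's type
type Publishers []Publisher

func (ps Publishers) Deliver(msg Message) {
	for _, p := range ps {
		if p.Type() == msg.Type() {
			p.Deliver(msg)
		}
	}
}
```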
a few more things to fully complete the build-time capability to disable one or the other
first i need to put the publishers into their respective API implementations
next i create an interface for them that includes the publisher interfaces and maybe some other part somehow, maybe again just the type thing
then, in the relay, instead of explicitly specifying which of the interfaces are available, it calls the registry to get the list of the ones that ran their registration at startup
then i can disable one or the other interface with a build tag, and it's all automagical
and it means that the APIs become an interface, and in the future i can make new apis really easily by implementing the key interfaces that compose the api interface, adding their registration and build tag, and voila: everything loosely coupled and modular, as it should be
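something like this is what the composed interface could look like (purely a hypothetical sketch):

```go
package api

import "net/http"

// API is a composition of the key interfaces: identify yourself,
// serve your http routes, and publish events to your subscribers
type API interface {
	Type() string          // identifies the api to the registry
	Handler() http.Handler // the api's http entry point
	Publish(ev []byte)     // deliver a serialized event to subscribers
}
```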
#realy #devstr #progressreport
almost finished the abstraction of the publisher interface... the transport specific elements are now fully separated into their own packages and can become part of the API libraries for websocket and http.
when this is done, it will mean i can later add more, like a TCP/QUIC api, for example, which will be like the websockets but without the http->websocket upgrade round trip that websockets require
it also means that later it will be a lot easier to add a full new event encoding, much the same process as what i'm doing now except then adding an abstraction for the encoder
a little ways to go yet, but i'm on the home straight
the main difference it will make is that where before the event publish went through a mutex to add a subscription, it will instead just bump a return channel and context, and when the subscription gets a CLOSE it will just cancel the context, and the publisher will iterate the subscriptions and delete the ones with canceled contexts
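a rough sketch of that mechanism (not the actual realy types):

```go
package publish

import "context"

type Subscription struct {
	Ctx    context.Context
	Cancel context.CancelFunc // a CLOSE just calls this
	Events chan []byte        // serialized events back to the client
}

type Publisher struct {
	subs []*Subscription
}

func (p *Publisher) Subscribe(s *Subscription) { p.subs = append(p.subs, s) }

// Deliver sends to live subscriptions and deletes the ones whose
// context has been canceled, so no mutex-guarded removal is needed
func (p *Publisher) Deliver(ev []byte) {
	live := p.subs[:0]
	for _, s := range p.subs {
		select {
		case <-s.Ctx.Done():
			continue // canceled by CLOSE: drop it from the list
		case s.Events <- ev:
		default:
			// subscriber not keeping up; skipping here is a policy choice
		}
		live = append(live, s)
	}
	p.subs = live
}
```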
much greater separation of concerns: the api libraries then have the publisher interface along with the method call interface, and become a unified, consistent API that can probably even be further abstracted into an interface, meaning they can be interacted with from the relay side without needing to refer to implementation specifics, and it will then be what Uncle Bob would consider "clean code"
it is usually ok, if you don't have any intention to add new ways of interacting between two parts of your codebase, to not make interfaces. some people, like my friend from ireland who i worked with on indra, are of the opinion that you should make an interface immediately, so that when you do realise you need more, different implementations of stuff, you don't have to refactor your code to accept the new implementation, and i think he's probably right about this
i'm cleaning up and refining someone else's code, written without thought for the future, probably because he is a grant chaser and not thinking about his work becoming a commercially employed technology, which is also why the API of nostr is such a nightmare to extend
for this afternoon though, i'm pretty much past the point where i can see myself finishing the last piece of this to make the common abstraction and eliminate the tight coupling. tomorrow, now that i have teased apart the pieces nicely and started sketching out the publisher interface fully, the last part should just be a few hours' work
#realy #devstr #progressreport
i have now fully separated the websocket handling from the main relay code
the relay only concerns itself with saving events, and by the magic of interfaces, the parts of the server needed by both the socket api and the http api are now passed into their packages without creating a circular dependency, making the points of contact very clear and very cleanly separating the two
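the idea in miniature, with hypothetical method names: the api package declares the slice of the server it needs as an interface, and the relay passes itself in

```go
package socketapi

import "context"

// Relay is the subset of the server this api consumes; because it's
// declared here, this package never imports the relay package
type Relay interface {
	SaveEvent(ctx context.Context, ev []byte) error
	QueryEvents(ctx context.Context, filter []byte) ([][]byte, error)
}

type Handler struct {
	relay Relay // injected at startup, no circular dependency
}

func New(r Relay) *Handler { return &Handler{relay: r} }
```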
the last step, which will require a version bump, is to now make a bunch of build tags and some stubs that noop whichever of the two apis is not included in the build, and there will be the possibility of deploying either the legacy, hybrid, or new API without cluttering your server's memory and disk space with the parts you aren't using
well, there is still going to be a map and mutex for the socket api, shared by each instance that is created to serve a socket client, but it will only be allocated, not populated, if the socket api is not running. it has to be at the relay level because each socket shares this state in order to delete itself from the list, for the subscription management
so, probably not really that close to finishing the task; really the subscription processing should also be forked out into the socket api and http api for each of their relevant parts, and then it will be fully separated and isolated
i think i will need to push the actual part where the subscription is sent fully out into each of the apis, but the part that maintains the list of subscriptions, like the client list, kinda has to be common, as the relay itself is a web server that either upgrades the request for the socket or forwards the request for the http
all i know is i think i need to go eat some food
#realy #devstr #progressreport
not really a big thing, just been busy refactoring the HTTP API so that it's not tightly coupled with the relay server library, separating the websocket stuff from the http stuff
now at v1.14.0 to signify that it's been refactored, i unified the huma API registration into a single operations type which means i don't have to touch the server code to add new http methods
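a sketch of that shape (hypothetical names, real huma v2 calls):

```go
package operations

import (
	"context"
	"encoding/json"
	"net/http"

	"github.com/danielgtaylor/huma/v2"
)

type Operations struct{ /* server dependencies injected here */ }

type GetEventInput struct {
	ID string `path:"id"`
}

type GetEventOutput struct {
	Body struct {
		Event json.RawMessage `json:"event"`
	}
}

func (o *Operations) GetEvent(ctx context.Context, in *GetEventInput) (*GetEventOutput, error) {
	out := &GetEventOutput{}
	// ... fetch the event for in.ID from storage ...
	return out, nil
}

// RegisterAll is the single call the server makes; new http methods
// get appended here, nowhere else
func (o *Operations) RegisterAll(api huma.API) {
	huma.Register(api, huma.Operation{
		OperationID: "get-event",
		Method:      http.MethodGet,
		Path:        "/event/{id}",
	}, o.GetEvent)
}
```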
also in other news, i have found that i can't get more than one more year of realy.lol, so i've migrated the repository to refer to https://realy.mleku.dev and set up the vanity redirect for the new name. the old code will still find its actual hosting on github just fine, but once you have it and check out the latest on the `dev` branch it will be all pointed to https://realy.mleku.dev
same name, different DNS... and i've extended the mleku.dev, which i also use for my email and some general things, to expire in 2029 so i won't have to think about that for some time
now, not sure what i should do next. i just wanted to make the http api more tidy, so making an interface and popping all the http methods into one registration makes things a lot neater
ah yes, i need to start building a test rig for this thing, and probably it will be much like mikedilger's thing but will also test the http API
so, back to the fiat mine i guess... the project i'm working on is getting close to MVP, practically there, just a bit of fighting with the openapi client code generator for new methods that are needed for the spider's efficient fetching of updated data
#realy #devstr #progressreport
i just got done adding a feature for version v1.12.0 - a live IP blocklist
it's actually a whole configuration system, but i just wanted, specifically, a block list that didn't require me to add more nonsense to the configuration, and that also let me explore uses for the HTTP API that you can now see at https://mleku.realy.lol/api
blocklist is just the first item; quite possibly a lot of other parts of the configuration will go in there, if they are settings that can be instantly active and don't change things in an irreversible way (unlike, say, the database configuration for event storage encoding, or similar things)
there will be more of these kinds of configuration elements added, for now, it is nice for the use case of blocking spiders that make pointless, retarded, repetitive requests and waste my VPS CPU, memory and bandwidth
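the blocklist check itself is the easy part; something like this middleware, where the lookup function consults whatever the configuration endpoint last stored (a sketch, not the actual realy code):

```go
package blocklist

import (
	"net"
	"net/http"
)

// Middleware rejects blocked IPs before any handler runs; because the
// lookup reads live state, updates via the HTTP API take effect on the
// very next request
func Middleware(next http.Handler, blocked func(ip string) bool) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		ip, _, err := net.SplitHostPort(r.RemoteAddr)
		if err != nil {
			ip = r.RemoteAddr // no port present
		}
		if blocked(ip) {
			http.Error(w, "blocked", http.StatusForbidden)
			return
		}
		next.ServeHTTP(w, r)
	})
}
```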
#realy #devstr #progressreport
not a big report today, though i've been busy doing things
http api now has import/export for admin stuffs, and i just added a new feature to rescan the events and regenerate the indexes, so when i add new index types, they get generated for all existing events
why i did that is that i have added a new index to enable searches that just return the event ID
but i want to be able to avoid unmarshaling the events to get their IDs, so the new index has the event ID in it
then i realised the pubkey needs to be there too, so results can be screened for muted authors, so their events aren't sent pointlessly and, IMO, disrespectfully to users
i hadn't finished designing this index, but even if i modify it, regenerating from the pre-pubkey version would just enable the new one. indexes are kinda nice like this: the data is neutral, so if you change the logic and it needs new data in the indexes, you can just change them. they are internal and private, they just need to be regenerated when new index types are added
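for illustration, an entry for such an index could be packed like this (a hypothetical layout, not realy's actual key format):

```go
package index

import "encoding/binary"

// IdKey packs a table prefix, the event id, the author pubkey (for
// mute screening) and a big-endian created_at so entries sort by time;
// being derived purely from stored events, it can be regenerated at will
func IdKey(prefix byte, id, pubkey [32]byte, createdAt int64) []byte {
	k := make([]byte, 0, 1+32+32+8)
	k = append(k, prefix)
	k = append(k, id[:]...)
	k = append(k, pubkey[:]...)
	k = binary.BigEndian.AppendUint64(k, uint64(createdAt))
	return k
}
```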
anyway, this is all in aid of implementing an http filter that is basically like a regular filter except it rips out the search and ids fields, because ids are a separate API - which i also have to implement, actually, but there is an order of operations in these things
first you encode, then you decode
first you get a list of event ids
then you write the code to get the events for them back out
almost there
maybe i can finish this filter tonight
#golang #devstr this is the best #openapi tooling i have found
https://github.com/danielgtaylor/huma
my vietnamese colleague who does smart contracts and backend servers in javascript uses some whizz-bang thing that lets him define the openapi spec in native code
this is not what you expect when you first look at this swagger/openapi stuff - you assume it "needs a spec written"
the dude literally didn't even know this was a thing, how bout dat
i had to figure all of this stuff out for myself, and i have spent since friday trying to make one of the toolkits for it work. i tried swagger 2.0, i tried openapi 3, i tried like 4 different generators, all of them shit, except go-swagger, which locks you into an extremely limited framework that is hard to poke normal fucking things into without making it painful
huma just does the job... literally just declare the function as a closure with a struct defined in situ, and voila, wham, bam, thank you ma'am, here's your json formatted spec, and a web GUI available at /docs to boot
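it really is about this terse (adapted from huma's own docs, using the stdlib mux adapter):

```go
package main

import (
	"context"
	"net/http"

	"github.com/danielgtaylor/huma/v2"
	"github.com/danielgtaylor/huma/v2/adapters/humago"
)

// the output struct defined alongside the handler is what generates
// the spec; no separate yaml required
type GreetingOutput struct {
	Body struct {
		Message string `json:"message" example:"hello, world"`
	}
}

func main() {
	mux := http.NewServeMux()
	api := humago.New(mux, huma.DefaultConfig("Example API", "1.0.0"))

	// the handler is a closure over typed input/output structs; the
	// json spec and the /docs web GUI come for free
	huma.Get(api, "/greeting/{name}", func(ctx context.Context, input *struct {
		Name string `path:"name"`
	}) (*GreetingOutput, error) {
		out := &GreetingOutput{}
		out.Body.Message = "hello, " + input.Name
		return out, nil
	})

	http.ListenAndServe(":8888", mux)
}
```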
everything else has been a nightmare. i hope i can move forward quickly with this, because the people at my fiat mine think i'm incompetent, even though i have literally written almost the entire underpinning of the server i was assigned to, and none of them even understand how a recommendation engine works, and it's like, uh
:cry:
it's just a thing that mines user data and compares profiles to find similar profiles and then uses that to make evaluations or suggestions based on it
well, anyway... it's a job, and it gets a bit frustrating at times being a back end dev. i successfully won the bounties on several grant projects last year, despite incredible obstacles (mostly nostr obstacles), so now i'm moving towards more generic stuff, and coinciding with that, i can now apply this knowledge to finally implement the nostr http api
once they see the heroku endpoints up and running (and omg don't get me started on heroku what a shitshow) they will start to understand
what's even more annoying is that they think i'm going slow, when it's literally taken 4 weeks for the front end and back end guys to add like 4 small things, and they aren't in production yet, so i can't even test my algorithms on live data. i'm trapped here in disregarded land where i can see what i have done but zero others can. i'm literally the only person in my company who can write an advanced algorithm, and what i'm doing now isn't even nearly as complex as what i have done before
the json data format shit, though - i hope i'm gonna bump into some tricks to speed up how fast i adapt to their shitty lack of typing, because the Go native json parser turns everything into generic interfaces and slices of interfaces, and this means i have to write manual code to identify and unpack all of that to put it into strictly typed data structures
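the complaint in miniature:

```go
package main

import (
	"encoding/json"
	"fmt"
)

type Profile struct {
	Name   string    `json:"name"`
	Scores []float64 `json:"scores"`
}

func main() {
	data := []byte(`{"name":"a","scores":[1,2,3]}`)

	// the untyped way: every level needs a manual type assertion
	var raw any
	json.Unmarshal(data, &raw)
	m := raw.(map[string]any)
	name := m["name"].(string)
	for _, v := range m["scores"].([]any) {
		_ = v.(float64) // even every number needs its own assertion
	}

	// the typed way: one line into a strictly typed structure
	var p Profile
	json.Unmarshal(data, &p)
	fmt.Println(name, p.Scores)
}
```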
and they think it's slow doing 400 million comparisons, created by the n(n-1) pattern of a recommendation matrix (around 20,000 profiles compared pairwise already gets you there)
anyway, just venting but now time to hit this huma and regain some of my humor
a great example of the #interop failure of #nostr #devstr
i mute a user in jumble
it is still visible in coracle and nostrudel
i mute it also in coracle
it is still visible in nostrudel
i close coracle and nostrudel tabs forevermore, i'm not tolerating this pollution of my computer's memory and my screen's pixels and the waste of CPU and memory (ie, electricity bill and damage to my hardware) that this all entails
because only #jumble actually is respecting me as a user, with the small exception that #coracle lets me totally hide any notice of the trash i muted. and if anyone wants to actually interact with me, they should stop being a twat in-group brainwashed drone; once i mute you, it's pretty much gonna be forever. i'm provoked to unmute people that are in a mode of behaving civilly, but honestly, i wish i wasn't. nostr:npub1syjmjy0dp62dhccq3g97fr87tngvpvzey08llyt6ul58m2zqpzps9wf6wl please enable fully hiding muted notes, instead of just wasting almost as much space on my screen and wasting my time downloading them
#realy does not send you events at all from users you muted, so long as it has your mute list to consult before returning your results
this should just be standard. no bandwidth, no compute, no memory, and no display bandwidth wasted on shit you don't care about
i just spent much of today and yesterday afternoon learning to write openapi specs, and i have built out a whole thing now and got it generated. not sure if it's exactly what i want - gorilla/mux - but i am always hearing this name, so ok, trying that first
found a nice tool to serve up an embedded swagger UI through it. i was horrified to see that it was 11mb for the standalone distribution (the one that doesn't depend on NPM's servers), but ok, this is not unbearable. a little slow updating, but i don't think i particularly care about it being more current than 5 months ago:
https://github.com/swaggest/swgui
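wiring it up is about this much code, if i recall the swgui readme correctly (the v5emb package embeds the UI assets, so nothing gets fetched from NPM at runtime):

```go
package main

import (
	"net/http"

	"github.com/swaggest/swgui/v5emb"
)

func main() {
	// args: page title, url of the generated spec, base path of the UI
	http.Handle("/docs/", v5emb.New("realy HTTP API", "/openapi.json", "/docs/"))
	http.ListenAndServe(":8080", nil)
}
```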
so i will now have a generated API interface that is easy to make clients with for this new nostr HTTP interface i'm building, and people will be able to poke at the API without having to leave the comfort of their browser
also did i ever mention that i don't care to submit a PR about what i'm building? that's right, because this thing has its own documentation endpoint and every instance of #realy will be serving the docs of the version of the spec it uses
no need to wait forever for a bunch of bikeshedding to go on and someone to decide "oh nah, we don't like this" even though i've got another relay dev colleague using it and three client devs deploying it in their app
flinging cats into nests of strutting pigeons perched on chess boards they think they have had victory on is my idea of fun
#devstr #swagger #golang
#devstr #realy #progressreport
i have been in the process of building the new HTTP protocol, but i wanted to first actually upgrade a production system to whatever new code i have running, to sorta make sure that it's really working, with a reasonable number of actual users and spiders making connections to it
well, anyway, the purpose was pretty small, mainly
there is now a new index, which consists of an event ID and the event's created_at timestamp
for now, the production system has just made these for every event it has, and will generate them for every new event that comes in
but the reason for it was so that as soon as i update to the full finished MVP implementation of the protocols, the necessary indexes are already in place
i have basically already implemented the fetch by ID endpoint and the event publish via http; the last piece is the http `/filter` endpoint, which provides for doing a search based on kinds, authors and tags.
the "search" field is a separate thing anyway, and is intended for full text indexes and ... well, DVMs, which are basically what i'm superseding btw
these return only the event IDs, and to enable that, i needed to create a new index that stores the event ID and created_at timestamp, so i can find the event by index, then use it to find the FullIdIndex entry, and from that assemble a list and sort it either ascending or descending based on the timestamp in the index
without having to decode the event data - that's important, because decoding is an expensive operation when those two fields are all i need to get the result
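the assembly step, sketched with hypothetical types: the index alone yields (id, created_at) pairs, so sorting never touches the event encoding

```go
package query

import "sort"

// IdTimestamp is what the index entry decodes to: just enough to
// identify the event and order the results
type IdTimestamp struct {
	Id        [32]byte
	CreatedAt int64
}

// SortIds orders results by the timestamp carried in the index entry,
// ascending or descending as the filter requests
func SortIds(results []IdTimestamp, ascending bool) {
	sort.Slice(results, func(i, j int) bool {
		if ascending {
			return results[i].CreatedAt < results[j].CreatedAt
		}
		return results[i].CreatedAt > results[j].CreatedAt
	})
}
```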
and then the caller knows that, at the moment the results were delivered, the list is correct for the state, and it can then segment that list, if necessary, and request the individual events it actually needs, which is a big bandwidth saving as well as enabling simpler pagination by shifting the query state to the client
of course, clients can decide to update that state, and because they already have the same query's results, if they store them, they can even see if new events popped up in between, as chaos tends to allow (clock skew and network latency), but the client doesn't have to have those events thrown at it immediately, as is the case with standard nostr nip-01 EVENT envelope responses on the websocket
now, they can literally just ask for a set of event IDs, and have them spewed back as line structured JSON (jsonl) and voila
far simpler to parse and understand for a humble web developer
#devstr #progressreport i made some fixes to #realy today
instead of blocking clients asking for DMs when the socket is not authed, it sends AUTH and just doesn't return them until it gets auth
in many cases, clients do that after i post a new note, because the nip-11 says "restricted writes" and that makes them actually auth, and once that socket is authed, i can read my messages. not that i use them, becos they don't work: nostr DMs are a joke, because nostr sockets are a dumb idea, hardly any client implements auth properly still, and the specification is as transparent as the lower atmosphere of jupiter.
anyway, so, there seems to be some improvement, jumble is now happily posting and fetching notes properly again after it was broked for a bit of today
i'm feeling quite exhaustage also
on other fronts, i am close to finishing the JWT token verification. i am just gonna say that the #golang #JWT library is horribly documented, and i basically had to poke at several things before i could even read a clear statement that i needed some function to return a goddamn key
yes, that will be the one that fetches the 13004 event if it finds it, and decodes the pubkey and then uses that
just writing the test at this point, using the key directly from the generation step
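for the record, the badly documented bit boils down to this (sketched with golang-jwt/jwt/v5; the 13004 lookup itself is elided):

```go
package auth

import "github.com/golang-jwt/jwt/v5"

// Verify parses and verifies a token; the whole job of the Keyfunc,
// which the docs never state plainly, is to hand back the verification
// key - here, the pubkey that would come from the 13004 event
func Verify(tokenString string, pubKey any) (*jwt.Token, error) {
	return jwt.Parse(tokenString, func(t *jwt.Token) (any, error) {
		// in the real thing: check the signing method, fetch the
		// 13004 event, decode the pubkey, and return it
		return pubKey, nil
	})
}
```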
compared to the btcec/decred bip-340 library, the API of the conventional "blessed" cryptographic signature algorithms is a pile of dogshit covered in shredded car tires that has been set on fire
you think there could be anything more stinky than a tire fire? yes, a dogshit-and-tire-fire... that is the state of "official" cryptography, compared to bitcoiner cryptography, which is sleek and svelte and sane
no, i'm just kidding - bitcoiner cryptography, except for the C library, is abominable, because roasbeef is a jerk who has repeatedly stopped me contributing to either btcd or lnd, because he is a smartarse jerk, and you know he's a jerk because he isn't here to ask me for an apology
fuckin assclown
idk how he came to have anything to do with the invention of the lightning protocol but i'm sure as fuck certain it was only that he built LND and nothing more, because his code is awful, beyond awful, and i hate him also because he doesn't seem to care that his repos don't have working configuration systems
anyhow
probably i will finish the JWT auth scheme validation code tomorrow and then wrap it into the http library, and voila: normie clients with a token will be able to access auth-required relays without either websockets or bip-340 signatures - instead, a revocable bearer token scheme that lets another key stand in for auth purposes
also, did i mention that roasbeef is a dick?
i would love to have one of his fans rebuff me, that would make me feel like i actually had an interaction with him that wasn't him stopping me doing useful things for bitcoin