#realy #devstr #progressreport after some fighting with the huma api to play nice with my easy-registering router - mainly it was just realising that the huma servemux should match on path prefixes, not the whole path - i finally had the HTTP API presenting

first things first, i wanted to move more stuff into dynamic configuration stored in the database, and the first thing to do is enable configuration of the admins, in order to restrict access to the admin parts of the API. i haven't fully finished it yet, but it starts up wide open now, and you have to use the /configuration/set endpoint and drop at least one npub into the Admins field, and voila, it is now locked down

i have to first start by adding just one configuration to the environment, which is the initial password, which you put in the Authorization header field to allow access. this ensures that the relay is not open to the world should it be deployed on some far distant VPS that can be spidered by nasty people who might quickly figure out how to break it on you

once those two pieces are in place, i need to put back the nip-98 expiration variant generator tool, and then you can use that token temporarily to auth as an administrator and tinker with the other admin functions, but the configuration is the most important priority

so, a nice afternoon's work, dragging a bit into the evening, but i got my nice router library working with the huma API, and based on the code from the original realy i will reinstate the whole functionality of it pretty quickly... along the way i will probably find something to make a bit better, but i think overall it's fine as it is. it's just a bit clunky to use the export function in the scalar REST docs UI, but with my nip-98 capable curl tool, nurl, you can just use that and basta

now is time for bed tho #gn
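here's a minimal sketch (not realy's actual code) of the prefix-matching idea that got the huma servemux behaving: everything under a registered prefix routes to that handler, instead of only exact path matches

```go
package main

import (
	"net/http"
	"strings"
)

// route pairs a path prefix with the handler that owns everything
// under it (eg a huma servemux mounted at its base path)
type route struct {
	prefix  string
	handler http.Handler
}

// PrefixMux is a hypothetical illustration: the first registered
// prefix that matches the request path wins
type PrefixMux struct{ routes []route }

func (m *PrefixMux) Handle(prefix string, h http.Handler) {
	m.routes = append(m.routes, route{prefix, h})
}

func (m *PrefixMux) ServeHTTP(w http.ResponseWriter, r *http.Request) {
	for _, rt := range m.routes {
		if strings.HasPrefix(r.URL.Path, rt.prefix) {
			rt.handler.ServeHTTP(w, r)
			return
		}
	}
	http.NotFound(w, r)
}
```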

#realy #devstr #progressreport today was mostly just finishing up polishing the rewrite, but this evening i wrote myself a nice HTTP path/header router and completed the publisher router

i'm aiming to make this thing as lean and modular as possible, so now i can make packages that do http handling (eg the nostr nip-01 websocket API, or the relay information page, or whatever) that self-register themselves, and you can optionally disable registering them by using a build tag on the source file containing the implementation. most likely i will roll that all into one base directory with a single file and build tag that pulls in the given type and registers it, and then the main relay will only call the router, which only sees the packages that got included by not being excluded from the build (sketch after this paragraph)

learning a lot more about how to write http stuff in #golang - man, http routers are unbelievably, stupidly simple. i just wrote a router, HBU? lol

no idea how much cruft i'm gonna be able to shred from the main realy codebase just yet, but this router i just made, i know it shreds a big chunk of spaghetti and lets me isolate the parts cleanly... APIs will have their own handlers and their own publisher, all linked to a main publisher, and then if the api is disabled, it just isn't used, and voila: legacy nostr, hybrid legacy/http, or new-wave http

i also don't know how long this is going to take, but it feels really good just sitting quietly and calmly and building this. or maybe this is #ragecoding, because when you are rage coding, actually you are in such a state of mind that you could cut shit like your eyes are frickin lasers
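for illustration, a hypothetical sketch of the self-registration pattern (the names are made up, not realy's actual packages): the api package registers itself in init(), and the build tag on the file is the on/off switch

```go
//go:build !no_websocket

// excluding this file with -tags no_websocket removes the websocket
// API from the build entirely; the router never sees it
package websocketapi

import "net/http"

// the registry would normally live in the shared router package;
// it's stubbed in-package here so the sketch stands alone
var registry = map[string]http.Handler{}

func Register(prefix string, h http.Handler) { registry[prefix] = h }

type Handler struct{}

func (Handler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
	// upgrade to websocket and speak nip-01 here
}

func init() {
	// self-register at package load; nothing else needs to know
	// this package exists
	Register("/", Handler{})
}
```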

#devstr #lifehack i have finally found a way to have the tilix tiling terminal emulator let me have code locations in my logs, set up so that i can click on the relative file path (so long as i am in the directory of the source code, even if running it remotely) and jump straight to the line of code where the log was printed from:

create a script called `setcurrent` in your PATH (eg ~/.local/bin/setcurrent):

```bash
#!/usr/bin/bash
echo $(pwd) > ~/.current
```

set the following environment variable in your ~/.bashrc:

```bash
PROMPT_COMMAND='setcurrent'
```

then set up tilix custom links using the following regular expressions, replacing the path as necessary, and setting perhaps a different program than ide (this is for goland, i use an alias to the binary):

```
^((([a-zA-Z@0-9-_.]+/)+([a-zA-Z@0-9-_.]+)):([0-9]+))
ide --line $5 $(cat /home/mleku/.current)/$2

[ ]((([a-zA-Z@0-9-_./]+)+([a-zA-Z@0-9-_.]+)):([0-9]+))
ide --line $5 $(cat /home/mleku/.current)/$2

([/](([a-zA-Z@0-9-_.]+/)+([a-zA-Z@0-9-_.]+)):([0-9]+))
ide --line $5 /$2
```

so long as you use this with an app containing /lol/log.go, as this one is, this finds that path and trims it off the log line locations, and in tilix you can click on the file locations that are relative to the CWD where you are running the relay from. if this is a remote machine, just go to the location where your source code is to make it work. i put this note in the source code of my logger

i have got fed up with the complexity of #realy and am rebuilding it from scratch... of course not from scratch, but copypasta from one side to the other, building the frame of it carefully first so it's crispy clean
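for reference, a minimal sketch of a caller-location logger whose output matches the regexes above, printing lines like "relative/path/file.go:42: message" (an illustration, not the actual lol/log.go):

```go
package lol

import (
	"fmt"
	"os"
	"path/filepath"
	"runtime"
)

// Log prints the message prefixed with the caller's file and line,
// relative to the CWD, so tilix can turn it into a clickable link
// that resolves against $(cat ~/.current)
func Log(msg string) {
	_, file, line, _ := runtime.Caller(1)
	if wd, err := os.Getwd(); err == nil {
		if rel, err := filepath.Rel(wd, file); err == nil {
			file = rel
		}
	}
	fmt.Fprintf(os.Stderr, "%s:%d: %s\n", file, line, msg)
}
```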

#realy #devstr #progressreport almost finished the abstraction of the publisher interface... the transport specific elements are now fully separated into their own packages and can become part of the API libraries for websocket and http. when this is done, it will mean i can later add more, like a TCP/QUIC api, for example, that will be like the websockets but without the http->websocket round trip that websockets require due to the upgrade. it also means that later it will be a lot easier to add a full new event encoding, much the same process as what i'm doing now, except then adding an abstraction for the encoder

a little way to go yet, but i'm on the home straight. the main difference it will make is that where before the event publish goes through a mutex to add a subscription, it will instead just bump a return channel and context, and when the subscription gets a CLOSE it will just cancel the context, and the publisher will iterate the subscriptions and delete the ones whose context has been canceled

much greater separation of concerns: the api libraries then have the publisher interface along with the method call interface, and become a unified, consistent API that probably can even be further abstracted into an interface, meaning they can then be interacted with from the relay side without needing to refer to implementation specifics, and it will then be what Uncle Bob would consider "clean code"

it is usually ok, if you don't have any intention to add new ways of interacting between two parts of your codebase, not to make interfaces. some people, like my friend from ireland who i worked with on indra, are of the opinion to make an interface immediately, so that when you do realise you need to make more, different implementations of stuff, you don't have to refactor your code to accept the new implementation, and i think he's probably right about this

i'm cleaning up and refining someone else's code, written without thought for the future, probably because he is a grant chaser and not thinking about his work becoming a commercially employed technology, which is also why the API of nostr is such a nightmare to extend

for this afternoon though, i'm pretty much past the point where i can see myself finishing the last piece of this to make the common abstraction and eliminate the tight coupling. tomorrow, now that i have teased apart the pieces nicely and started sketching out the publisher interface fully, the last part should just be a few hours' work
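a sketch of the context-cancel idea with hypothetical types (single delivery goroutine assumed, so no locking shown): CLOSE just cancels the subscription's context, and the publisher prunes dead subscriptions while delivering

```go
package publisher

import "context"

// Subscription is a hypothetical stand-in: a delivery channel plus
// the context that a CLOSE cancels
type Subscription struct {
	Ctx    context.Context
	Events chan []byte
}

type Publisher struct {
	subs []*Subscription
}

// Deliver sends an encoded event to every live subscription and
// drops the ones whose context has been canceled - no mutex dance
// around a shared subscription map
func (p *Publisher) Deliver(ev []byte) {
	live := p.subs[:0]
	for _, s := range p.subs {
		select {
		case <-s.Ctx.Done():
			continue // CLOSEd: delete it by not keeping it
		case s.Events <- ev:
		default:
			// slow consumer: skip this event rather than block
		}
		live = append(live, s)
	}
	p.subs = live
}
```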

#realy #devstr #progressreport i have now fully separated the websocket handling from the main relay code. the relay only concerns itself with saving events, and by the magic of interfaces, the parts of the server needed by both the socket api and the http api are now passed into their packages without creating a circular dependency, making the points of contact very clear and very cleanly separating the two

the last step, which will require a version bump, is to make a bunch of build tags and some stubs that noop whichever of the two apis is not included in the build, and then there will be the possibility of deploying either the legacy, hybrid, or new API without cluttering your server's memory and disk space with the parts you aren't using

well, there is still going to be a map and mutex for the socket api that is shared by each instance that is created to serve a socket client, but it will only be allocated, not populated, if the socket api is not running. it has to be at the relay level because each socket shares this state in order to delete itself from the list, for the subscription management

so, probably not really that close to finishing the task. really, the subscription processing should also be forked out into the socket api and http api for each of their relevant parts, and then it will be fully separated and isolated. i think i will need to push the actual part where the subscription is sent fully into each of the apis, but the part that maintains the list of subscriptions, like the client list, kinda has to be common, as the relay itself is a web server, and either sends a request to be upgraded for socket, or forwards the request for the http

all i know is i think i need to go eat some food
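the interface hand-off looks roughly like this (hypothetical names, not realy's actual types): each api package declares only the slice of the server it needs, so neither imports the relay package and the dependency arrow points one way

```go
package socketapi

import "context"

// Relay is the socket api's view of the server: just the parts it
// needs, declared on the consumer side so there is no circular
// import back into the relay package
type Relay interface {
	SaveEvent(ctx context.Context, ev []byte) error
	QueryEvents(ctx context.Context, filter []byte) ([][]byte, error)
}

// Server holds whatever the relay injects at startup
type Server struct {
	relay Relay
}

func New(r Relay) *Server { return &Server{relay: r} }
```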

#realy #devstr #progressreport not really a big thing, just been busy refactoring the HTTP API so that it's not tightly coupled with the relay server library, separating the websocket stuff from the http stuff. now at v1.14.0 to signify that it's been refactored: i unified the huma API registration into a single operations type, which means i don't have to touch the server code to add new http methods

also in other news, i have found that i can't get more than one more year of realy.lol, so i've migrated the repository to refer to https://realy.mleku.dev and set up the vanity redirect for the new name. the old code will still find its actual hosting on github just fine, but once you have it and check out the latest on the `dev` branch, it will all point to https://realy.mleku.dev - same name, different DNS... and i've extended mleku.dev, which i also use for my email and some general things, to expire in 2029, so i won't have to think about that for some time

now, not sure what i should do next. i just wanted to make the http api more tidy, so making an interface and popping all the http methods into one registration makes things a lot neater. ah yes, i need to start building a test rig for this thing, and probably it will be much like mikedilger's thing but will also test the http API

so, back to the fiat mine i guess... the project i'm working on is getting close to MVP, practically there, just a bit of fighting with the openapi client code generator for new methods that are needed for the spider's efficient fetching of updated data
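the single-registration shape is roughly this (hypothetical method names, not the actual realy code): one operations type owns the huma API handle, and adding an endpoint means adding a method here, never touching the server

```go
package operations

import "github.com/danielgtaylor/huma/v2"

// Operations gathers every http method registration in one place
type Operations struct {
	API huma.API
}

// RegisterAll wires up every endpoint; the server calls this once
// and never needs editing when endpoints are added
func (o Operations) RegisterAll() {
	o.registerConfigurationSet()
	o.registerEvent()
	o.registerFilter()
}

func (o Operations) registerConfigurationSet() { /* huma.Register(...) */ }
func (o Operations) registerEvent()            { /* huma.Register(...) */ }
func (o Operations) registerFilter()           { /* huma.Register(...) */ }
```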

#golang #devstr this is the best #openapi tooling i have found: https://github.com/danielgtaylor/huma

my vietnamese colleague who does smart contracts and backend servers in javascript uses some whizz-bang thing that lets him define the openapi spec in native code. this is not what you expect when you first look at this swagger/openapi stuff - you see "needs a spec written". the dude literally didn't even know this was a thing, how bout dat

i had to figure all of this stuff out for myself, and i have spent since friday trying to make one of the toolkits for it work. i tried swagger 2.0, i tried openapi 3, i tried like 4 different generators, all of them shit, except go-swagger, which locks you into an extremely limited framework that is hard to poke normal fucking things into without making it painful

huma just does the job... literally just declare the function as a closure with a struct defined in situ and voila, wham, bam, thank you ma'am, here's your json formatted spec, and a web GUI available at /docs to boot. everything else has been a nightmare

i hope that i can move forward quickly with this, because the people at my fiat mine think i'm incompetent even though i have literally written almost the entire underpinning of the server that i was assigned to, and none of them can even understand how a recommendation engine works, and it's like, uh :cry: it's just a thing that mines user data and compares profiles to find similar profiles and then uses that to make evaluations or suggestions based on it

well, anyway... it's a job, and it gets a bit frustrating at times being a back end dev. i successfully won the bounties on several grant projects last year, despite incredible obstacles (mostly nostr obstacles), so now i'm moving towards more generic stuff, and coinciding with it is that i can now apply this knowledge to finally implement the nostr http api. once they see the heroku endpoints up and running (and omg don't get me started on heroku, what a shitshow) they will start to understand

what's even more annoying is that they think i'm going slow, when it's taken literally 4 weeks for the front end and back end guys to add like 4 small things, and they aren't in production yet, so i can't even test my algorithms on live data. i'm trapped here in disregarded land where i can see what i have done but zero others can. i'm literally the only person in my company who can write an advanced algorithm, and what i'm doing now isn't even nearly as complex as what i have done before

the json data format shit though - i hope i'm gonna bump into some tricks to speed up how fast i adapt to their shitty lack of typing, because the Go native json parser turns everything into generic interfaces and slices of interface, and this means i have to write manual code to identify and unpack all of that to put it into strictly typed data structures. and they think it's slow doing 400 million comparisons, created by the (n-1)n pattern of a recommendation matrix

anyway, just venting, but now time to hit this huma and regain some of my humor
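to show what "declare the function as a closure with a struct defined in situ" looks like, here's a minimal huma v2 example along the lines of its README (the greet endpoint is made up for illustration):

```go
package main

import (
	"context"
	"fmt"
	"net/http"

	"github.com/danielgtaylor/huma/v2"
	"github.com/danielgtaylor/huma/v2/adapters/humago"
)

func main() {
	mux := http.NewServeMux()
	api := humago.New(mux, huma.DefaultConfig("Demo API", "1.0.0"))

	// input and output structs declared in situ; huma derives the
	// openapi spec from the tags and serves the docs UI at /docs
	type GreetInput struct {
		Name string `path:"name" doc:"who to greet"`
	}
	type GreetOutput struct {
		Body struct {
			Message string `json:"message"`
		}
	}
	huma.Register(api, huma.Operation{
		OperationID: "greet",
		Method:      http.MethodGet,
		Path:        "/greet/{name}",
	}, func(ctx context.Context, in *GreetInput) (*GreetOutput, error) {
		out := &GreetOutput{}
		out.Body.Message = fmt.Sprintf("hello, %s", in.Name)
		return out, nil
	})

	http.ListenAndServe(":8888", mux)
}
```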

#devstr #realy #progressreport i have been in the process of building the new HTTP protocol, but i wanted to first actually upgrade a production system to whatever new code i have got running, to sorta make sure that it's really working, with a reasonable number of actual users and spiders making connections to it

well, anyway, the purpose was pretty small: mainly there is now a new index, which consists of an event ID and the event created_at timestamp. for now, the production system has just made these for every event it has, and will generate them for every new event that comes in, but the reason for it was so that as soon as i update to the full finished MVP implementation of the protocols, the necessary indexes are already in place

i have basically already implemented the fetch-by-ID endpoint and the event publish via http. the last piece is the http `/filter` endpoint, which basically provides for doing a search based on kinds, authors and tags. the "search" field is a separate thing anyway, and is intended for full text indexes and... well, DVMs, which are basically what i'm superseding btw

these return only the event IDs, and to enable that, i needed to create a new index that stores the event ID and created_at timestamp, so i can find the event by index, then use the index to find the FullIdIndex entry, and from that i can assemble a list and sort it either ascending or descending based on the timestamp in the index, without having to decode the event data. that's important, because decoding is an expensive operation when those two fields are all i need to get the result

the caller can then know that, at the moment the results were delivered, the list is correct for the state, and it can segment that list, if necessary, and request the individual events that it actually needs, which is a big bandwidth saving as well as enabling simpler pagination by shifting the query state to the client

of course, clients can decide to update that state, and because they already have the same query's results, if they store them, they can even see whether new events popped up in between, as chaos tends to allow (clock skew and network latency). but the client doesn't have to have those events thrown at it immediately, as is the case with nostr standard nip-01 EVENT envelope responses on the websocket. now they can literally just ask for a set of event IDs and have them spewed back as line structured JSON (jsonl), and voila, far simpler to parse and understand for a humble web developer
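the index entry is tiny, something like this (hypothetical types, not the actual realy index encoding) - the point is the whole sort happens on these 40-byte entries, and no event ever gets decoded just to order a result list

```go
package index

import "sort"

// IdCreatedAt sketches the new index entry: just enough to sort a
// result set and to fetch the full event later by id
type IdCreatedAt struct {
	Id        [32]byte
	CreatedAt int64 // unix seconds from the event
}

// Sort orders entries by timestamp, ascending or descending,
// without ever touching the stored event data
func Sort(entries []IdCreatedAt, ascending bool) {
	sort.Slice(entries, func(i, j int) bool {
		if ascending {
			return entries[i].CreatedAt < entries[j].CreatedAt
		}
		return entries[i].CreatedAt > entries[j].CreatedAt
	})
}
```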

#devstr #progressreport i made some fixes to #realy today. instead of blocking clients asking for DMs when they're not authed, it sends auth and just doesn't return them until it gets auth. in many cases, clients do that after i post a new note, because the nip-11 says "restricted writes" and that makes them actually auth, and once that socket is authed, i can read my messages. not that i use them, becos they don't work - nostr DMs are a joke, because nostr sockets are a dumb idea, hardly any client implements auth properly still, and the specification is as transparent as the lower atmosphere of jupiter. anyway, there seems to be some improvement - jumble is now happily posting and fetching notes properly again after it was broked for a bit of today. i'm feeling quite exhaustage

also on other fronts, i am close to finishing the JWT token verification. i am just gonna say that the #golang #JWT library is horribly documented, and i basically had to poke at several things before i could even read a clear statement that i needed some function to return a goddamn key. yes, that will be the one that fetches the 13004 event, if it finds it, decodes the pubkey, and then uses that. just writing the test at this point, using the key directly from the generation step

compared to the btcec/decred bip-340 library, the API of the conventional "blessed" cryptographic signature algorithms is a pile of dogshit covered in shredded car tires that has been set on fire. you think there could be anything more stinky than a tire fire? yes, a dogshit-and-tire fire... that is the state of "official" cryptography, compared to bitcoiner cryptography, which is sleek and svelte and sane. no, i'm just kidding: bitcoiner cryptography, except the C library, is abominable, because roasbeef is a jerk who has repeatedly stopped me contributing to either btcd or lnd, because he is a smartarse jerk, and you know he's a jerk because he isn't here to ask me for an apology. fuckin assclown. idk how he came to have anything to do with the invention of the lightning protocol, but i'm sure as fuck certain it was only that he built LND and nothing more, because his code is awful, beyond awful, and i hate him also because he doesn't seem to care that his repos don't have working configuration systems

anyhow, probably i will finish making the JWT auth scheme validation code tomorrow, and then wrap it into the http library, and voila: normie clients with a token will be able to access auth-required relays without either websockets or bip-340 signatures - instead, a revocable bearer token scheme that lets another key stand in for auth purposes

also, did i mention that roasbeef is a dick? i would love to have one of his fans rebuff me; that would make me feel like i actually had an interaction with him that wasn't him stopping me doing useful things for bitcoin
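the "function to return a goddamn key" is the keyfunc that jwt.Parse calls; here's a sketch using github.com/golang-jwt/jwt/v5, where the kind 13004 lookup is a hypothetical stub (and a bip-340 signed token would also need a custom signing method registered with the library):

```go
package auth

import (
	"errors"

	"github.com/golang-jwt/jwt/v5"
)

// keyfunc is what the library calls to get the verification key;
// here it would fetch the kind 13004 event for the token's issuer
// and return the pubkey it delegates to (stubbed in this sketch)
func keyfunc(t *jwt.Token) (any, error) {
	return nil, errors.New("13004 lookup not implemented in this sketch")
}

// Verify parses the token, invoking keyfunc for the key, then the
// library checks the signature and standard claims
func Verify(tokenString string) (*jwt.Token, error) {
	return jwt.Parse(tokenString, keyfunc)
}
```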
