been busy making this whole automatic access control work as perfectly as possible for #realy
v1.2.24 now fully implements this: any time the owners change their follow list, or anyone they follow changes *their* follow list, it schedules an access control regeneration that activates as soon as the updated follow list has been saved
this ensures that when you stop following some spammer who was DMing you, they immediately lose the ability to push any further events to your inbox on the relay
and whenever you, as an authorised user of the relay, add someone to your follow list, that follow is immediately able to DM you in your realy inbox
had to create a secondary list for the update process, so it only monitors changes on the follow lists of the owners and their follows; the secondary list then contains all of the follows of the follows, who are allowed to publish to the relay
as an example of what sort of numbers this entails, my follow list of 98 has a further 10,000 npubs automatically allowed to auth to wss://mleku.realy.lol and publish events to it
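the two-tier list described above can be sketched roughly like this (a minimal sketch, not realy's actual code; the types and function names here are hypothetical, and in realy the follow lists would come from stored kind-3 events):

```go
package main

import "fmt"

// followLists maps a pubkey to the pubkeys on its follow list.
// hypothetical stand-in for follow lists parsed from kind-3 events.
type followLists map[string][]string

// allowedPubkeys rebuilds the access list: the owners, everyone the
// owners follow (who also get a DM inbox), and everyone those follows
// follow (allowed to auth and publish to the relay).
func allowedPubkeys(owners []string, fl followLists) map[string]bool {
	allowed := make(map[string]bool)
	for _, owner := range owners {
		allowed[owner] = true
		for _, follow := range fl[owner] {
			allowed[follow] = true
			// secondary list: follows of follows
			for _, ff := range fl[follow] {
				allowed[ff] = true
			}
		}
	}
	return allowed
}

func main() {
	fl := followLists{
		"owner": {"alice", "bob"},
		"alice": {"carol"},
		"bob":   {"dave", "carol"},
	}
	allowed := allowedPubkeys([]string{"owner"}, fl)
	fmt.Println(len(allowed)) // owner, alice, bob, carol, dave
}
```

unfollowing someone and regenerating simply drops them (and their follows, unless reachable via another follow) from the map, which is why a spammer loses access immediately after the updated list is saved.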
the point of this is of course that if you are a relay provider, be it volunteer, internal, or paid (or group funded, which is basically like membership dues), you want to allow everything the user might want to see to be uploaded to the relay
also, i assert that there is a logical semantics in following: anyone you follow is allowed to DM you, and with this change, if someone you allowed to DM your inbox on a relay spams you, they are no longer on the relay access list once you unfollow them
been a bit of a long day at making fixes to this, the implementation wasn't anything like sufficient this morning, i've made like 6 new commits today on it
#progressreport #devstr #relay #relaydev
#devstr #progressreport w00t, i've now got the queries bumping their last access time counters in #realy
now i can add back the garbage collector... first the counter, which collects all of the events, their data size (including indexes), and last access time; this list then goes through a sorter, which sorts the events by their access time
then the mark function goes in ascending order (oldest first) and selects events until the total falls below the low water mark target (the GC triggers when the total exceeds the high water mark, and we mark the oldest events until the remainder would drop under the low water mark)
then finally, with all of the events to delete selected, we run the "sweep" function, which deletes those least recently accessed events to bring us down to our low water mark target, and voila
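the count/sort/mark steps above can be sketched like this (a minimal sketch under my own naming, not realy's actual implementation; `eventInfo` and the size-based watermarks are assumptions):

```go
package main

import (
	"fmt"
	"sort"
)

// eventInfo is a hypothetical record the GC counter gathers per event:
// its id, total stored size including indexes, and the last access
// timestamp that queries keep bumping.
type eventInfo struct {
	id         string
	size       int64
	lastAccess int64
}

// mark sorts events oldest-first by last access and selects victims
// until the total stored size would drop below lowWater. The GC only
// runs at all when the total exceeds highWater.
func mark(events []eventInfo, total, highWater, lowWater int64) []string {
	if total <= highWater {
		return nil
	}
	sort.Slice(events, func(i, j int) bool {
		return events[i].lastAccess < events[j].lastAccess
	})
	var victims []string
	for _, ev := range events {
		if total <= lowWater {
			break
		}
		victims = append(victims, ev.id)
		total -= ev.size
	}
	return victims
}

func main() {
	events := []eventInfo{
		{"a", 400, 100}, // oldest access
		{"b", 300, 200},
		{"c", 300, 300}, // newest access
	}
	// total 1000 exceeds high water 900; mark down to low water 500
	fmt.Println(mark(events, 1000, 900, 500)) // [a b]
}
```

the sweep pass would then just delete the returned ids from the event store; keeping mark and sweep separate means the delete batch can be applied in one transaction.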
this is IMO an essential feature for a database that can potentially grow very large, because without it there really is no other way to contain the event database size to the available storage on your server; IMHO this is a mandatory feature for a relay, but afaik no other relay (except my buggy previous version, replicatr) actually has this...
i could be wrong...
probably many other database drivers have access time marking but idk if anyone has bothered to do this because the most popular relay, strfry, is used by hipster damus boi who resolves this issue by his periodic nukenings
realy will do its own periodic nukening automatically, so you don't have the issue of hot data going dark the way a manual nukening causes
#realy #progressreport
it appears that i have finally squashed all the most salient causes of forever loops in my event query code
i have also implemented a separate count function that avoids decoding events if it can, so it performs better; it returns an approximate flag if it finds replaceable events, because the plan is to not actually delete the old versions, but i haven't written the post-processing in the query to enable this, or removed the delete-on-replacement in event save (coming soon, on request from my shadowy super-nostr-dev friend who needs this for his work)
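the idea of counting from index entries without decoding, and flagging the result approximate when replaceable kinds show up, could look something like this (my own sketch, not realy's code; `indexEntry` and `count` are hypothetical names, and the kind ranges follow the usual nostr replaceable/addressable convention):

```go
package main

import "fmt"

// indexEntry is a hypothetical stand-in for a stored index record;
// the kind alone tells us whether an event is replaceable, so no
// full event decode is needed.
type indexEntry struct {
	id   string
	kind int
}

// replaceable follows the nostr convention: kinds 0 and 3, plus the
// 10000-19999 (replaceable) and 30000-39999 (addressable) ranges.
func replaceable(kind int) bool {
	return kind == 0 || kind == 3 ||
		(kind >= 10000 && kind < 20000) ||
		(kind >= 30000 && kind < 40000)
}

// count tallies matching index entries without decoding events,
// flagging the result approximate if any replaceable kinds appear,
// since superseded versions may still be sitting in the store.
func count(entries []indexEntry) (n int, approximate bool) {
	for _, e := range entries {
		n++
		if replaceable(e.kind) {
			approximate = true
		}
	}
	return
}

func main() {
	entries := []indexEntry{{"a", 1}, {"b", 0}, {"c", 30023}}
	n, approx := count(entries)
	fmt.Println(n, approx) // 3 true
}
```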
amethyst's endless stream of replaceable events helped expose more of the issues in the query and save for replaceable events, which was an edge case that neither nostrudel nor coracle triggered, this also now does use a bit of memory when it happens but it's quickly freed within the following minute (on more constrained hardware with less memory - eg 1gb - this might cause a brief spike of CPU usage to manage the more aggressive garbage collection - nothing i can really do about that...)
realy now uses 1-2gb of memory most of the time, closer to 1gb
i think it's getting very close to release candidate, which will be a minor version bump to v1.1.x
there have been quite a few breaking changes, but i don't think anyone is importing it yet so 😅 whew
when it's bumped to v1.1.x i'm going to probably deploy it remotely on my VPS server
ah yes, i have other work to do today, but the other major features that will be part of v1.1.x will be a working garbage collector, a framework for shared/remote second layer event stores (even algaefs or blossom could potentially be implemented), and i also want to get around to making a badger database driver for btcd... too many things to do
better check my calendar and make sure i'm not slacking on someone haha