been busy making this whole automatic access control work as perfectly as possible for #realy

v1.2.24 now fully implements this, so that any time the owners change their follow list, or any of the people they follow change *their* follow list, it schedules an access control regeneration that activates as soon as the updated follow list has been saved

this ensures that when you stop following some spammer who was DMing you, they immediately lose the ability to push any further events to your inbox on the relay, and whenever you add someone to your follow list, as an authorised user of the relay, your new follow can immediately DM you in your realy inbox

had to create a secondary list for the update process, so it only monitors changes on the follow lists of the owners and their follows, and the secondary list then contains all of the follows of the follows, who are allowed to publish to the relay

as an example of what sort of numbers this entails, my follow list of 98 brings a further 10,000 npubs that are automatically allowed to auth to wss://mleku.realy.lol and publish events to it

the point of this is of course that if you are a hypothetical relay provider, be it volunteer, internal, or for pay, and in the case of a paid or group funded service (which can be volunteer, basically like membership dues), then you want to allow everything the user might want to see to be uploaded to the relay, and also anyone the user follows

i assert that there is a logical semantics in following: you allow those you follow to DM you, and with this change, if you allow them to DM your inbox on a relay and they spam you, they are no longer on the relay access list once you unfollow them

been a bit of a long day of making fixes to this, the implementation wasn't anything like sufficient this morning, i've made like 6 new commits today on it

#progressreport #devstr #relay #relaydev
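
to make the mechanics concrete, here's a minimal Go sketch of the two-tier list idea, assuming hypothetical names and an in-memory map (my illustration, not realy's actual code):

package main

import "fmt"

// AccessLists holds the two tiers: Primary is owners plus their follows
// (the only pubkeys whose follow lists are monitored for changes),
// Secondary is everyone followed by a Primary pubkey (allowed to auth
// and publish, e.g. to DM inboxes).
type AccessLists struct {
	Primary   map[string]struct{}
	Secondary map[string]struct{}
}

// Regenerate rebuilds both tiers from the owners' pubkeys and a lookup
// that returns the current follow list for a pubkey. This is the step
// scheduled whenever a monitored follow list event is saved.
func Regenerate(owners []string, follows func(pub string) []string) *AccessLists {
	a := &AccessLists{
		Primary:   make(map[string]struct{}),
		Secondary: make(map[string]struct{}),
	}
	for _, o := range owners {
		a.Primary[o] = struct{}{}
		for _, f := range follows(o) {
			a.Primary[f] = struct{}{}
		}
	}
	// follows of follows: allowed on the relay, but their own follow
	// lists are not monitored, keeping the watch set small
	for p := range a.Primary {
		for _, f := range follows(p) {
			if _, ok := a.Primary[f]; !ok {
				a.Secondary[f] = struct{}{}
			}
		}
	}
	return a
}

// Allowed reports whether a pubkey may auth and publish to the relay.
func (a *AccessLists) Allowed(pub string) bool {
	if _, ok := a.Primary[pub]; ok {
		return true
	}
	_, ok := a.Secondary[pub]
	return ok
}

func main() {
	follows := func(pub string) []string {
		return map[string][]string{
			"owner": {"alice"},
			"alice": {"bob"},
		}[pub]
	}
	a := Regenerate([]string{"owner"}, follows)
	fmt.Println(a.Allowed("bob"), a.Allowed("mallory")) // true false
}

the point of the split is that only the Primary set's follow lists need watching, so regeneration stays cheap even when the Secondary set runs to ~10,000 npubs; unfollowing a spammer drops them from the result of the next regeneration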

#devstr #progressreport

w00t, i've now got the queries bumping their last access time counters in #realy, so now i can add back the garbage collector...

first the counter, which collects all of the events, their data size (including indexes), and last access time

this list then goes to a sorter, which sorts the events by their access time

then the mark function, which walks in ascending order (oldest first) and selects the list of events that exceed the low water mark target (the GC triggers when the total exceeds the high water mark, and we mark the events that would bring it down to the first event under the low water mark)

then finally, with all of the events selected that we are going to delete, we run the "sweep" function, which deletes the least recently accessed events that give us our low water mark target, and voila

this is IMO an essential feature for a database that can potentially grow very large, because there really is no other way to contain the event database size to the storage available on your server

without this, IMHO, a relay is missing a mandatory feature, but afaik no other relay (except my buggy previous version, replicatr) actually has it... i could be wrong... probably many other database drivers have access time marking, but idk if anyone has bothered to do this, because the most popular relay, strfry, is used by hipster damus boi, who resolves this issue with his periodic nukenings

realy will have its own periodic nukening automatically, so you don't get the issue of hot data going dark like a nukening causes
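
a minimal sketch of the count/sort/mark phases, assuming hypothetical types (the real store is on disk and bumps access times in the query path):

package main

import (
	"fmt"
	"sort"
)

// EventMeta is what the counter gathers per event: size on disk
// (event data plus its indexes) and the last time a query touched it.
type EventMeta struct {
	ID         string
	Size       int64 // bytes, including indexes
	LastAccess int64 // unix seconds, bumped by the query path
}

// Mark returns the IDs the sweep should delete: if total usage exceeds
// hiWater, walk the events oldest-first and mark them until usage
// drops below loWater.
func Mark(events []EventMeta, hiWater, loWater int64) (ids []string) {
	var total int64
	for _, e := range events {
		total += e.Size
	}
	if total <= hiWater {
		return nil // GC only triggers above the high water mark
	}
	// the sorter: least recently accessed first
	sort.Slice(events, func(i, j int) bool {
		return events[i].LastAccess < events[j].LastAccess
	})
	for _, e := range events {
		if total <= loWater {
			break
		}
		ids = append(ids, e.ID)
		total -= e.Size
	}
	return ids
}

func main() {
	events := []EventMeta{
		{"a", 400, 100}, {"b", 300, 300}, {"c", 500, 200},
	}
	// 1200 bytes stored, high water 1000, low water 700:
	// marks "a" then "c" (the two oldest), bringing usage to 300
	fmt.Println(Mark(events, 1000, 700)) // [a c]
}

sweep is then just a batched delete over the returned IDs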

been working on new things today... one was an integer-to-bytes encoder, and i tried my hand at a hex encoder

the integer encoder was a win, 2x faster than going through strings:

goos: linux
goarch: amd64
pkg: github.com/mleku/nodl/pkg/utils/ints
cpu: AMD Ryzen 5 PRO 4650G with Radeon Graphics
BenchmarkByteStringToInt64/Int64AppendToByteString-12          100000000    10.45 ns/op
BenchmarkByteStringToInt64/ByteStringToInt64ToByteString-12     68144872    20.47 ns/op
BenchmarkByteStringToInt64/Itoa-12                              41371257    28.11 ns/op
BenchmarkByteStringToInt64/ItoaAtoi-12                          33901826    35.95 ns/op

the hex encoder, however, has already been made bleeding fast in the stdlib:

goos: linux
goarch: amd64
pkg: github.com/mleku/nodl/pkg/utils/hex
cpu: AMD Ryzen 5 PRO 4650G with Radeon Graphics
BenchmarkAppendHexToByteString/AppendHexToByteString-12          2508732    481.0 ns/op
BenchmarkAppendHexToByteString/AppendHexToByteStringToHex-12     1000000     1007 ns/op
BenchmarkAppendHexToByteString/hex.AppendEncode-12              11465517    106.0 ns/op
BenchmarkAppendHexToByteString/hex.AppendEncodeDecode-12         7106614    169.5 ns/op

my idea of doing the calculation with a little arithmetic turned out to be ~4.5x slower than the stdlib (481 vs 106 ns/op above)

but in any case, i've now got the fastest possible number encoder for the datestamps and kind numbers in tags, and i know that hex.AppendEncode and hex.Decode are fine

now to move on to implementing the elements and putting together the new event and filter formats, with binary and json encoding... my previous json encoder ended up using reflect, which is why it was slow, so hopefully it'll be nice and fast this time

#devstr #benchmarks #golang
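
for illustration, a minimal sketch of the append-style integer encoding idea, with my own names rather than the ones in github.com/mleku/nodl: writing digits straight into a caller-supplied byte slice skips the intermediate string that strconv.Itoa allocates

package main

import "fmt"

// AppendInt64 appends the decimal form of n to dst and returns the
// extended slice, in the style of strconv.AppendInt.
// (math.MinInt64 is not handled in this sketch.)
func AppendInt64(dst []byte, n int64) []byte {
	if n < 0 {
		dst = append(dst, '-')
		n = -n
	}
	var buf [20]byte // enough digits for any int64 in base 10
	i := len(buf)
	for {
		i--
		buf[i] = byte('0' + n%10)
		n /= 10
		if n == 0 {
			break
		}
	}
	return append(dst, buf[i:]...)
}

func main() {
	b := []byte(`"created_at":`)
	b = AppendInt64(b, 1719859200) // e.g. a nostr created_at timestamp
	fmt.Println(string(b))         // "created_at":1719859200
}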
