well I pay for 10TB out...
but no, if you do the rough figuring, it kinda works like this:
- user uploads image.
- 10-30 relays pull that image and store it.
so the total storage for each jpg or png inflates roughly 1,000-3,000% compared to hosting it once.
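to put actual numbers on that back-of-the-envelope figure (the 2.5 MB file size here is just an assumption for illustration):

```python
# rough figuring behind the 1,000-3,000% number above;
# the 2.5 MB file size is an illustrative assumption
image_size_mb = 2.5
for relay_count in (10, 30):
    total_mb = image_size_mb * relay_count
    inflation_pct = relay_count * 100  # 10 copies = 1,000%; 30 copies = 3,000%
    print(f"{relay_count} relays: {total_mb:.0f} MB stored, {inflation_pct:,}% of the original")
```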
if you start running imagemagick on your front-end (if you aren't doing that already), that would be a great help.
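for example, something like this run server-side at upload time would shave most jpgs down considerably; this is just a sketch, and the size cap and quality settings are arbitrary picks:

```python
import subprocess

# recompress an upload with ImageMagick before publishing; the 1600px cap
# and quality 82 are illustrative choices, tune to taste
def shrink_image(src: str, dst: str) -> None:
    subprocess.run(
        ["convert", src,
         "-resize", "1600x1600>",  # only shrink images larger than 1600px
         "-strip",                 # drop EXIF and other metadata
         "-quality", "82",
         dst],
        check=True,
    )

shrink_image("upload.jpg", "upload-web.jpg")
```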
but overall, the protocol is broken in the way it's designed,
and what will eventually happen is that when nostr scales, the only people able to actually run relays successfully will literally be big-data operations with millions of dollars...
i am almost 95% certain this is inherently by design.
there's actually nothing great about the "decentralized architecture," as it is currently posited.
the goal of "publish to the web" in a social sense isn't really attainable; there is no logical reason the entire internet needs a permanent record of what someone publishes. ever.
the http protocol already did this fairly well, but the last 15 years of "smart" encroachment, and the lack of technical folks making decent tools for people to create basic websites, made it all fall flat.
we need another thing like geocities, imho.
it's a much better way to share things than real-time feeds and fragmented attention spans.
--- fragmented-attention sharing focuses on risk/reward/ego stroking, versus supplication and the sharing of personal identity, goals, morality, and ethics, and is short-lived and highly temporal.
anyway, just some thoughts...
okay, so i have a blog, right? sometimes when i post a link to a blog article on nostr, i get over 1,300 hits to that article. i don't know if it's pulling all the media (i could look at the apache logs but haven't gotten to that drill-down just yet).
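for anyone who wants to do that apache drill-down, here's a rough sketch of the idea; the log path, article path, and file extensions are assumptions you'd swap for your own setup:

```python
from collections import Counter

# tally article hits vs. media hits in an apache access log;
# the paths and extensions below are assumptions for illustration
LOG = "/var/log/apache2/access.log"
ARTICLE = "/blog/my-post"

hits = Counter()
with open(LOG, encoding="utf-8", errors="replace") as f:
    for line in f:
        parts = line.split('"')
        if len(parts) < 2:
            continue
        fields = parts[1].split()  # e.g. 'GET /blog/my-post HTTP/1.1'
        if len(fields) < 2:
            continue
        path = fields[1]
        if path.startswith(ARTICLE):
            hits["article"] += 1
        elif path.lower().endswith((".jpg", ".png", ".gif", ".mp4")):
            hits["media"] += 1

print(hits)
```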
>From: cd9ea7c...<-8eb09... at 03/26/23 23:15:02 on wss://relay.damus.io
>---------------
>Can you explain why each image has to be downloaded by 10-30 relays per user?
>
((without getting too complicated, it doesn't necessarily mean that.)) i'm not sure how many relays most people use.
>As far as I understand, the image url is referenced within an event. That does not necessarily mean a relay would have to fetch it, right?
>
(( referenced within an event, yes. and correct, the relay doesn't necessarily have to fetch the jpg. )) but the moment someone on global, or on that relay, or on that person's followers list fetches the event, that jpg is suddenly being pulled from the media storage server, wherever that is, by every user accessing it via that event on that relay, if that makes sense.
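a bare-bones sketch of that split, just to illustrate it; the event content and the URL are made-up examples:

```python
import json
import re
import urllib.request

# minimal sketch of the flow described above: the relay only serves the
# event JSON; the image bytes come from the media server. the event
# content and URL here are made-up examples.
event = json.loads(
    '{"kind": 1, "content": "check this out https://media.example.com/pic.jpg", "tags": []}'
)

for url in re.findall(r"https?://\S+\.(?:jpg|png|gif)", event["content"]):
    with urllib.request.urlopen(url) as resp:  # this request hits the media server, not the relay
        data = resp.read()
    print(f"pulled {len(data)} bytes from {url}")
```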
>Since the user first fetches the events from relays, and the client then makes a separate request to the image server.
>
>If for any reason all relays would need to fetch images (let's say to verify the image content with a hash sum or whatever), this could also be solved in another way.
>
((this is interesting and sounds convoluted, but i'm curious what you're proposing here, and how it would be beneficial..))
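just guessing at what you might mean, a hash check after download could look something like this; the function and its inputs are my own assumptions for illustration, not anything specified:

```python
import hashlib
import urllib.request

# one guess at what hash verification could look like: the event carries
# the expected sha256 of the image, and anyone can check it after download.
# this is an illustrative sketch, not a spec.
def verify_image(url: str, expected_sha256: str) -> bool:
    with urllib.request.urlopen(url) as resp:
        data = resp.read()
    return hashlib.sha256(data).hexdigest() == expected_sha256

# verify_image("https://media.example.com/pic.jpg", "e3b0c4...")
```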
>Also another thought that came to my mind: to serve images that would cause high traffic, let's assume we know the user posting it has a lot of followers: the image is uploaded to the image server. The relay recognizes this image as a very important image (VII) and publishes a specific event which lists the clients that have downloaded this image within the last 20 minutes. Using this event, clients could request the image from other clients instead of getting it from the image server. It is like a client-only peer-to-peer image CDN. I am not sure though how this could be implemented so that clients could be found. Maybe with ephemeral events and web workers.
(( this is actually very interesting. )) I like this a lot.
what I've yet to see is some sort of procedural regularity with the relays i'm using (about 40 of them), wherein I can predict a pattern such that the above would actually be applicable in a relevant way, without it turning into the dreaded "this torrent started but has no seeds" scenario.
furthermore, on your last point: that would mean other clients also have knowledge of an uploaded image and where it is, making clients more of a search-engine back end (but then how would they serve and validate metadata requests for an image, media resource, or mp4, for example?). the discovery portion of this is a bit vexing...
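that said, as a thought experiment, the announcement half of that peer CDN could be a throwaway ephemeral event, something like this; the kind number, tag names, and endpoint URL are pure assumptions on my part, not any existing NIP:

```python
import json
import time

# thought-experiment sketch of the "i have this image cached" announcement
# from the idea quoted above; the kind number, tag names, and endpoint URL
# are pure assumptions, not any existing NIP
announcement = {
    "kind": 24242,  # ephemeral range (20000-29999): relays forward but don't store
    "created_at": int(time.time()),
    "tags": [
        ["x", "<sha256-of-the-image>"],                    # which image this client holds
        ["endpoint", "https://client.example.net/cache"],  # where peers could fetch it
        ["expiration", str(int(time.time()) + 20 * 60)],   # the 20-minute window
    ],
    "content": "",
}
print(json.dumps(announcement, indent=2))
```

the ephemeral kind range fits here because relays pass those events along but don't persist them, which matches the 20-minute cache window in the proposal.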
I wonder what [email protected] thinks of all this.
all of you are welcome to come discuss this on my irc server, if you'd like. there are already a few tech folks that idle there if you need some extra brains to pick.
2x2chat.com #2x2