In terms of what would hold up in court, I really can't say. I don’t want to sound too relaxed, nor do I want to paint too much of a dystopian picture… so I’ll focus more on the technical feasibility of deanonymising someone.
Yes, DNS is a major one that projects like pkarr, Onion Services, etc., try to address (pkarr is still pretty new, and Tor is no silver bullet). But for personal users, there are a gazillion other layers that can expose them: the software running on their mobile or desktop (keyboard apps, AI tools users have "authorised" to learn from their behaviour, shady background daemons like the ones Meta was using, the OSes themselves…). Then there's your ISP if you're self-hosting, CAs if you're using HTTPS, CDNs, and the VPS or cloud provider you're renting hardware from, all owning bits and pieces of personal information and metadata that can be pieced together to paint a picture. Then there are all sorts of fingerprinting techniques… and about a gazillion other possible deanonymisation vectors.
Again, I can’t say what would hold up in court, but I’d work with the assumption that, for the vast majority of people exposed to Nostr, the authorities can figure out who’s behind an npub or operating a relay fairly easily.
This is more directed to nostr:nprofile1qqswuyd9ml6qcxd92h6pleptfrcqucvvjy39vg4wx7mv9wm8kakyujgpypmhxue69uhkx6r0wf6hxtndd94k2erfd3nk2u3wvdhk6w35xs6z7qgwwaehxw309ahx7uewd3hkctcpypmhxue69uhkummnw3ezuetfde6kuer6wasku7nfvuh8xurpvdjj7a0nq40 than to you, but I’m replying here to avoid breaking the chain (and it touches on what you said anyway).
I was thinking about this tonight and, honestly, my position hasn’t moved much. DHT might be the "end game" (well, there’s no real end game, but you get me), but looking at the current state of Pubky, it feels like anything of the sort is still years away from achieving a Nostr-like multi-client, multi-intent ecosystem. That comparison isn’t entirely fair given how young Pubky is, but on the other hand, it’s not like the bar is set that high when you consider the competition from Big Tech and their unlimited engineering resources.
If we try to force DHTs, fancy serialisation, complex contracts, etc. onto Nostr right now, we risk alienating the average pleb dev: someone who can maybe, kinda handle WebSockets and JSON if you give them a JavaScript library and a half-decent LLM agent, but who would seriously struggle with more complex, enterprisey architecture, never mind R&D stuff. I’m not being dismissive of these devs, by the way. I’m not the person who’s going to build the next viral Spotify alternative over Nostr, but they might be.
I'm not saying DHT R&D shouldn’t be pursued. In fact, I’m very bullish on Pubky and Pkarr. You've also announced your own greenfield R&D work, and I’m genuinely looking forward to seeing what comes of it. But if we’re being realistic, even though Nostr is small in the grand scheme of things, it’s already a large enough ecosystem to warrant a slower pace: adopting new ideas only once they’ve matured and stabilised. I know that’s not what a lot of devs want to hear, but IMO it’s the right thing to do if we want Nostr to grow. We need to prioritise expansion over exploration.
That said, with a bit of imagination, we can keep Nostr working quite well while those newer ideas are being tested. If we move beyond the purist ideal of "one fully decentralised P2P solution that fits all," we’ll see that NIP-05/NIP-65 actually does the job fairly well (even if you, and admittedly I myself, don’t always give it enough credit). We just need to break the problem down a bit. DNS itself needs to be complemented by rDNS, which, IMO, is a bit of a dirty workaround... but hey, it works (sorta). WKD works pretty well for PGP alongside traditional keyservers. Clients can and should cache NIP-05, Kind 0, Kind 10002, and other list/set data kinds. So as long as we have a deterministic way to retrieve this info, I think it’s good enough for now.
The main short-term issue is that Gossip, as it stands, isn’t deterministic when all you know is someone’s npub, and, as Nuh puts it, it’s currently working mostly due to a set of coincidences that will fall apart as people get more serious about running their own relays.
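To be clear about which half is the problem: once you have someone's Kind 10002 event, reading their outbox relays from its "r" tags is trivial (per NIP-65, a tag without a read/write marker counts as both). A sketch of just that parsing step (function name is mine); the non-deterministic part is knowing which relay to ask for the event in the first place:

```python
def outbox_relays(event: dict) -> list[str]:
    """Extract write ('outbox') relay URLs from a Kind 10002 relay-list event.

    Per NIP-65, each relay is an "r" tag: ["r", "<url>"] for read+write,
    or ["r", "<url>", "read"/"write"] to restrict the direction.
    """
    relays = []
    for tag in event.get("tags", []):
        if len(tag) >= 2 and tag[0] == "r":
            marker = tag[2] if len(tag) > 2 else None
            if marker in (None, "write"):
                relays.append(tag[1])
    return relays
```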
My opinion on this is: is it really such a big deal if, for now, we have a bunch of index relays syncing with each other? Honestly, as long as index relays are lightweight and it’s easy for someone to run one and join the network (i.e. not Bluesky/AT Protocol levels of complexity), I don’t think it’s a problem.
Could it end up like HKP, where most folks rely on just 2 or 3 servers? Absolutely; in fact, that’s very likely. But it’s good enough for now. And once the experimental, fancy, quantum-resistant, DHT-based P2P solution is ready, we can build something like a gateway indexer relay to bridge the legacy world until clients are ready to migrate.
So, the TL;DR: I don’t think there’s anything wrong with playing the tortoise in the tortoise and the hare race of distributed social media protocols. The real issue we face today is that the Outbox model isn’t holding up in a world with thousands of personal relays. That’s the problem we should focus on solving now. Let other protocols run ahead with the fancy R&D stuff; we’ll learn from what they build, and later we can catch up and gradually move the majority of users to the fancier stuff. It might not be as exciting for engineers as working on greenfield projects, but it’s hard honest work, serving actual Nostr users.