Yeah, I'm designing the component to work well with a smart relay or a dumb one.
Most info is found fastest algorithmically when the search terms are clear, the context/environment is narrow, and the data set is preloaded. I don't need an LLM to find the book "Jane Eyre" in a card catalog on my machine where one of the cards is titled "Jane Eyre". As soon as I start typing "j-a-n..." it should just appear.
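To make that concrete, here's a minimal sketch of the typeahead idea: plain prefix matching over a preloaded, sorted list of titles. All names here are illustrative, not from the actual component.

```python
# Hypothetical typeahead sketch: binary-search prefix lookup over a
# preloaded catalog. No LLM needed - just sorted data and two bisects.
import bisect

def make_index(titles):
    """Return a sorted, lowercased index for prefix lookup."""
    return sorted((t.lower(), t) for t in titles)

def prefix_search(index, query, limit=10):
    """Find titles starting with `query` in O(log n + k)."""
    q = query.lower()
    lo = bisect.bisect_left(index, (q,))
    hi = bisect.bisect_right(index, (q + "\uffff",))
    return [original for _, original in index[lo:hi][:limit]]

catalog = make_index(["Jane Eyre", "Janet's Repentance", "Emma", "Jude the Obscure"])
print(prefix_search(catalog, "jan"))  # → ['Jane Eyre', "Janet's Repentance"]
```

Typing "jan" narrows the list instantly; every extra keystroke just tightens the same two bisects.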
Smart search will be more interesting for the wiki page, where the user is looking for information on or near a particular topic, so semantic search returns *related* results ordered by relevance. But even there, you don't have to yap at it for three minutes to find what you need, because we have narrowed the scope through the mere existence of the entry point on a particular, structured page, and we have designed a form for the results to be poured into. It doesn't need to vomit out an essay.
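The "related results, ordered by relevance" part can be sketched as cosine similarity over precomputed embeddings, with the output poured into a fixed result shape rather than a chat response. The vectors and page titles below are made up for illustration; they stand in for whatever embedding model the wiki would actually use.

```python
# Hedged sketch of semantic search: rank pages by cosine similarity
# to a query embedding, return a structured list - a form, not an essay.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def related(query_vec, pages, top_k=3):
    """Return [(title, score)] ranked most-relevant first."""
    scored = [(title, cosine(query_vec, vec)) for title, vec in pages.items()]
    return sorted(scored, key=lambda t: t[1], reverse=True)[:top_k]

# Toy 3-dimensional "embeddings" - real ones would come from a model.
pages = {
    "Gothic fiction":    [0.9, 0.1, 0.0],
    "Victorian novels":  [0.7, 0.3, 0.1],
    "Steam locomotives": [0.0, 0.1, 0.9],
}
print(related([0.8, 0.2, 0.0], pages))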
I guess I'm a traditionalist.
I've noticed that people are abandoning software calculators and asking LLMs to do basic arithmetic for them now.
ChatGPT, what is 14/2?
They like having one single text-based interface, and they just trust the AI to always return precise, accurate, objective answers.
Which I find incredibly ironic, as they also usually refuse to use anything command-line based, preferring GUIs. They want a GUI that is just one prompt bar, where they can just type "?" and the chatbot spits out the timetables for the Underground on a Wednesday at 21:00.
Well, we'll actually be displaying LLM-generated background info, summaries, and the like, but it won't be a chatbot. It'll just be fields in the card at the top. (If you click the big white square, it opens the details modal, which shows more details about the author, the genre, the historical period, the particular edition or translator, etc.)
Everyone wants to see the same sorts of things, so there's no point expecting them to actively ask for it. Just add a field to the GUI and display stuff they are likely to find interesting.
I know people have been staring at us, wondering why the one team chock full of mathematicians, computer scientists, electrical engineers, cognitive scientists, physicists, and data analysts is the only project that isn't "doing AI stuff".
We _are_ doing it, but we're spending more time thinking about how to do it _well_, and practicing different techniques, because it needs to be useful going forward, or we're wasting our time.
the main reason i have something to contribute is that i'm working intensively on my pet project, https://x.realy.lol, building a really fast, comprehensive search engine with standard filters and full-text indexes. as i elaborate the ideas i pick up along the way, i'm realising exactly how AI assistants on search engines work.
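the full-text-index part of that can be sketched as an inverted index mapping terms to document ids, intersected for multi-term queries. this is illustrative only, not the x.realy.lol implementation:

```python
# Tiny inverted-index sketch: terms -> set of doc ids, AND-intersected.
from collections import defaultdict

def build_index(docs):
    """Map each lowercase term to the set of doc ids containing it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return index

def search(index, query):
    """AND-search: ids of docs containing every query term."""
    sets = [index.get(term, set()) for term in query.lower().split()]
    return set.intersection(*sets) if sets else set()

docs = {1: "fast full text search", 2: "full text indexes", 3: "standard filters"}
print(sorted(search(build_index(docs), "full text")))  # → [1, 2]
```

real engines add tokenization, stemming, and ranking on top, but the lookup itself is this cheap, which is why algorithmic search wins when the terms are clear.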
i think the "replacing humans" chatter is way overstated. really, this is just like any tool that increases productivity: it reduces the manpower needed to do a given task. all technology ultimately displaces human labor, and that's the reason it's invented. i think some people, who are misanthropic and cruel-minded, actually hate all people and fantasize about living in a world where everyone says yes to them, instead of pointing out valid points of disagreement, or even getting so mad at the actions being taken that they take up arms. these people increasingly use soft power and psychological manipulation, but when that fails they bring out their hired guns (and in the future that will be drones too).
best to focus on how it empowers us, because we can also defend ourselves with soft power, and that's really central to the whole mission of bitcoin and nostr, in my opinion.