the most annoying thing about the #ai hype is that executives at many tech companies are drinking the kool-aid like it's heavenly ambrosia
and so they think it changes everything about their business, and because they understand the low level and back end stuff the least, they assume that what we are doing is "obsolete" or something
no, it's not. AI has changed nothing about low level and back end work, because stupid language models CANNOT do the rigorous logic or creative thinking that these areas require
but i'm constantly getting this "oh, we're going to move you to something else because your task is obsolete" shit from my fiat mine's CTO. frankly, although there were problems with the CTO i worked with at one shitcoin project back in 2021, i preferred having someone in charge of my work who was a peer in technical skill. he quickly wrote an algorithm that was needed for my job, which was a bit rude, since i could have done it myself in maybe a day more than it took him, but at least he wasn't second-guessing the requirements. his background was physics, and he knew quite a bit about experimental systems that required computer assistance
my current CTO is unqualified, she can hardly write code at all, so it really grates on me when she, to be blunt, just doesn't understand what people are telling her
the guys who do ML stuff with python vector databases aren't qualified to write services code, but even they understood well enough that most of what i've already done in my current work was necessary
anyhoo
just an anecdote, but it hints at a really big problem coming with code quality and reliability of services in the next few years, until people wake up from their media-induced AI mania
i mean, put it this way: my task requires a matrix table. it's simpler, in some sense, than what a vectordb does with ML language vector analysis, really just one dimension of what they're doing, but it has to be done efficiently, because as the data grows, regenerating the whole set gets quadratically more expensive (it's a pairwise, complete-graph computation), so the data set has to be updated frequently when user input changes, and the update has to efficiently skip any data that doesn't need work
my index and database design will achieve that, and do it at least 5x, if not 10x, faster than a vector database would, and i could increase its precision merely by adding a single function for weighted matching on the free-form user input, since those inputs are only simple phrases, not whole sentences, let alone paragraphs of text
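the shape of that design can be sketched roughly like this: a dirty set tracks which items changed, and only the pairs touching a changed item get rescored, so the full matrix is never regenerated. to be clear, everything here (the class, names, and scoring function) is a made-up illustration of the idea, not my actual schema:

```python
# hypothetical sketch: incremental maintenance of a pairwise score matrix.
# only pairs involving a "dirty" (changed) item are recomputed on refresh.

class ScoreMatrix:
    def __init__(self, score_fn):
        self.score_fn = score_fn   # compares two factor tuples, higher = closer
        self.items = {}            # item_id -> factor tuple
        self.scores = {}           # frozenset({a, b}) -> score
        self.dirty = set()         # items whose pairs need recomputing

    def upsert(self, item_id, factors):
        self.items[item_id] = factors
        self.dirty.add(item_id)    # mark only the changed item

    def refresh(self):
        # recompute only pairs touching a dirty item, never the whole matrix
        for a in self.dirty:
            for b in self.items:
                if a != b:
                    self.scores[frozenset((a, b))] = self.score_fn(
                        self.items[a], self.items[b])
        self.dirty.clear()

    def top_matches(self, item_id, k=5):
        pairs = [(other, self.scores[frozenset((item_id, other))])
                 for other in self.items
                 if other != item_id
                 and frozenset((item_id, other)) in self.scores]
        return sorted(pairs, key=lambda p: p[1], reverse=True)[:k]
```

run `refresh()` from a background job and the read path (`top_matches`) never touches the scoring function at all, which is the part that keeps fetches instant.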
fortunately i was able to get this across. tl;dr: this is mostly machine-generated, already-simple data (currently 8 factors, soon to be 9), and it's not a data type that suits more advanced language comparison engines like VectorDB. the main things they were worried about were scalability of performance and latency of data updates
i was able to clarify these points and also explain that "fuzzy matching" here is a baseline kind of matching over simpler, one-dimensional data sets that requires complete-graph computation, and that we run the updates in the background so fetching recommendations is instant, milliseconds
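for the free-form phrase part, a weighted match on short phrases can be as simple as token overlap with a reduced weight for partial matches. this is just a hypothetical illustration of that kind of function (the weights, tokenization, and name are all invented), not the actual comparison scheme:

```python
# hypothetical sketch of weighted fuzzy matching on short free-form phrases.
# exact token matches count 1.0; prefix matches (e.g. "shoe" vs "shoes")
# count a reduced weight. no embeddings involved, just one similarity dimension.

def weighted_phrase_match(query, candidate, prefix_weight=0.5):
    q_tokens = query.lower().split()
    c_tokens = candidate.lower().split()
    score = 0.0
    for qt in q_tokens:
        if qt in c_tokens:
            score += 1.0
        elif any(t.startswith(qt) or qt.startswith(t) for t in c_tokens):
            score += prefix_weight
    return score / max(len(q_tokens), 1)   # normalize by query length
```

because the inputs are only simple phrases, something this cheap can run inside the same complete-graph pass as the other factors.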
haha, i also have to write up the comparison scheme now... i built an algorithm but hadn't written it down in human-readable form, and this will be good anyway, because i should now be able to spot any errors in the way i've constructed the comparison calculations