The content of this piece doesn't really match the title. It's about running LLMs in containers using RamaLama and Ollama, written by Red Hat's Scott McCarty, "senior principal product manager for RHEL Server".
Agreed. Is he saying that, because there are local containerized runtimes, the backlash can be turned into local discovery instead of external API calls to proprietary systems?
AI is not hype. It's been growing steadily.
What is hype is LLMs, and ChatGPT in particular. When the masses realised they could talk to an AI and it responded like a human, they ascribed all sorts of human traits to it. Which is wrong; it's just statistical analysis.
AI can get more intelligent, but LLMs aren't the way. Even OpenAI says as much in their roadmap to general AI. The market craze has led them to push LLMs anyway.
The AI hype bros on social media are the actual worst and have done more damage than anything else to how I feel about AI these days.
The grift has been at unbearable levels for months now and it actually drove me to delete my X account recently.
Yep. Everything is exactly two models away. It's the new fusion. Yet after $X billion in expenditure, I still have to argue with the finest turd squeezed out of the industry over simple things a trained monkey could work out.
The whole thing is propped up on faith, hype, and investment capital. I work in the latter, and we're not impressed.
Edit: As for X, removing it from my existence was an improvement in mental health for sure.
Well, deleting the twitverse is an upside to your life, completely outside the whole imaginary intelligence topic.
Nice try, dear author. Alternatively, please consider that automation is what has actually empowered the technological innovations of recent decades, and that the entire AI path is a red herring; a wasteful, nondeterministic red herring, a mistake.
Perhaps THAT is why we roll our eyes, not impatience at how to obtain the alleged benefits of such monstrous poo pipelines.
My opinion, of course, but I will never make peace with what I regard as a fundamentally bad idea.
First off, even calling this tech "AI" instead of the much more precise and accurate "LLM" is a major indicator that most of the writing on the subject is hype. That language signals someone trying to sell something, not someone trying to accurately convey information.
Second, so far in my (very limited) experience, it sucks. I tried an assisted search tool the other day with the query "Do any EV charging stations offer 220v AC as an output?". The results had nothing to do with that input as a structured sentence. It was still just a bunch of links to EV charging articles, which were mostly shit in and of themselves. So the results list sucked, and the individual articles it linked to also sucked (which, admittedly, is not the fault of the sucky search algorithm).
This seemed like a prime example of where an LLM could shine: properly interpreting the grammar of my question to find results specifically related to it. But it just didn't work.
I don't think this will be an impediment at all to ownership eliminating labor jobs and replacing them with bots, mostly because modern ownership policy shows very little concern for whether its service sucks or not.
[deleted]