Just to say, I wasted a full 24 hours trying to get a local version of OpenClaw working. Total waste of time! Even if you spend 20 grand on hardware, you'll never be able to run anything locally that touches the likes of ChatGPT, Gemini, etc. There is no prospect whatsoever of consumer hardware ever catching up with the big data centers. A friend of mine just spent over 4 grand on an Nvidia Spark. It's scrap metal from day one!
Alas, we are trapped in a token economy indefinitely. I'm slightly gutted.
Many Bothans died to bring us this information.
Local Model Shock
11 days ago
#1
10 days ago
#2
Which models did you try to run? :)
qwen2.5-coder:7b works decently well in ollama for me with i7-6700K + 32 GB RAM + RX 7600 which is quite an old build.
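For anyone who wants to try the same thing, here's a minimal sketch of that setup using ollama's standard CLI (the prompt is just an example; the quantized 7B model fits in roughly 5 GB of VRAM, which is why it runs on an RX 7600):

```shell
# Pull the 7B coder model mentioned above and run a one-off prompt
ollama pull qwen2.5-coder:7b
ollama run qwen2.5-coder:7b "Write a Python function that reverses a string."

# ollama also serves a local HTTP API on port 11434, so you can hit it
# from scripts/editors instead of the interactive CLI:
curl http://localhost:11434/api/generate -d '{
  "model": "qwen2.5-coder:7b",
  "prompt": "Write a Python function that reverses a string.",
  "stream": false
}'
```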
10 days ago
#3
I tried several. I started on Friday night. I'm now speaking to you on Sunday night. This has taken up my whole weekend.
Here's the vibe:
The people who are doing demos of this kind of thing on YouTube are showing trivial, non-CPU-intensive demos. Things like, "What's the weather like in ____?" Or, "Here's some items in my fridge... can you give me a recipe?"
All of that stuff makes for a great YouTube video, but it's all smoke and mirrors.
In order to do something meaningful, you need the good models. That means getting on board with the likes of Claude, Gemini, ChatGPT, DeepSeek, and other heavy hitters. Anything less is useless for coding.
Here's the analogy:
The bleeding-edge AI tools still aren't as good as a well-trained developer. However, they are sort of like a 19-year-old who has left school and has taken up coding. A bit over-enthusiastic at times. Can make mistakes, but can also be useful if managed properly.
The local models, on the other hand, are like having a 12-year-old with some kind of neurodevelopmental disorder. I wish that person well, but you're not making money with somebody like that on your team.
It's brutal, but I'm just being honest. Running a local model is a complete waste of time. I don't care how rich you are or what gear you use. You will not beat the system!
I did a very deep dive into this and we are 7 to 12 years away from consumer-grade computers being able to operate at the same level as a modern data center. However, even if you wait that long, by the time we get there, the data centers will be way ahead.
So, very sadly, it's a gap that will likely never be closed.
Personally speaking, I've had massive success with "Grady" (which uses an external API). Unfortunately, my experience with local models has been a miserable failure. I'm telling you this to save you a lot of time and a lot of heartache.
I wish I had better news.