That price is very good, and even better is the fact that you can actually buy these right now (ships in 2 weeks, apparently) from their website! So many AI startups are price-on-application, B2B-only, etc.
ocelot/bobcat has nothing to do with the ML cards. It was their first go at implementing the RISC-V vector extension, as a proof of concept, and they open-sourced it. It's not very optimized, and the Ascalon architecture will be a lot different.
Relative to what? According to this, a single card is capable of >300 t/s on Mistral-7B, and the workstations with 4 cards are doing nearly 500 t/s on Mixtral 8x7B.
Yeah, the H100 and MI300 nearly 10x those numbers [1] at the same batch sizes, but those cards are unobtainium server-class hardware priced way outside the prosumer range, while these cards cost less than an RTX 4090 and only use ~300W.
What other options exist for individuals or small companies looking to run/train locally at that kind of speed?
Yes, but the benchmarks listed above are mostly done on an 8x setup at $1,400 each, so ca. $12k, and the performance achieved is a fraction of what a $30k H100 will do.
The benchmarks on GitHub use the n300, which has 2 chips per board with 4 boards in the system -- that's the "2x4" they refer to -- with each board being $1,400. So that's only $5.6k to match the system they sell, versus the H100, which is north of $30k. Well, OK, at an equivalent bs=32 it's only 4x or 5x worse than the H100 according to the benchmarks in this thread (500 t/s vs ~2,000 t/s), but as you note that's not the batch size people practically use, and the power usage is a factor of 4x worse overall too. So, at an impractical batch size it uses 4x more power for about 1/4 of the total tokens/second. Given the pricing you could in theory buy 4x as many cards while still being cheaper than an H100, but that totally ignores operational and other scaling costs. Apparently Tenstorrent is still on 12nm for Wormhole, too.
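To make the cost side of that concrete, here's a tiny back-of-the-envelope sketch using only the rough figures quoted in this thread (500 t/s for the 4x n300 setup at $1,400 per board, ~2,000 t/s for an H100 "north of $30k"); the numbers are illustrative, not measurements:

    # Rough price/performance comparison using the thread's approximate numbers.
    tt_tps, tt_price = 500, 4 * 1400      # ~500 tok/s, 4 n300 boards at $1,400 each
    h100_tps, h100_price = 2000, 30_000   # ~2,000 tok/s, H100 "north of $30k"

    print(f"4x n300: {tt_tps / tt_price * 1000:.0f} tok/s per $1k")     # ~89
    print(f"H100:    {h100_tps / h100_price * 1000:.0f} tok/s per $1k")  # ~67

    # Buying enough n300 boards to match one H100's quoted throughput:
    setups_needed = h100_tps / tt_tps                               # 4.0
    print(f"${setups_needed * tt_price:,.0f} vs ${h100_price:,}")   # $22,400 vs $30,000

Which is roughly the "buy 4x as many cards while still being cheaper" point, ignoring chassis, interconnect, power, and operational costs.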
Anyway, I overall agree with you that it's not as amazing as people here might make it sound. I think people are viewing it through rose-tinted glasses because, practically speaking, nobody else actually sells B2C accelerator hardware at a reasonable cost with actual availability, which is what they want. They look at Tenstorrent and see a "Buy now" form as a way to spend $5k USD and get 96GB of GDDR6 and a toolchain that's both open-source and not-Nvidia, or whatever. This forum is going to be particularly sensitive to things like that.
The actual hardware I think still has a ways to go, but hopefully they can scale it up in a bunch of ways and people can at least buy functionally usable cards with a software stack that works on them all. So, they're doing better than a lot of competitors in those ways, I guess...
I agree. Their 70B-parameter benchmarks are better than their 7B ones. Something is wrong with their software and they need to fix it. They get 4,000 tokens/s on Falcon 7B, which is what you should expect from processing a batch of prompts. The per-user performance is atrocious; their hardware should be at least twice as fast as a moderately overclocked DDR5 system.
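A rough way to see why a DDR5 box is the comparison point: at batch size 1, decoding is roughly memory-bandwidth bound, so tokens/s is about memory bandwidth divided by the bytes of weights read per token. A minimal sketch with illustrative, assumed numbers (not vendor specs):

    # Rough bandwidth-bound estimate for single-user (batch=1) decoding:
    # each generated token has to stream the full weight set from memory.
    def max_tokens_per_s(bandwidth_gb_s, params_billion, bytes_per_param):
        bytes_per_token = params_billion * 1e9 * bytes_per_param
        return bandwidth_gb_s * 1e9 / bytes_per_token

    # Illustrative numbers only:
    print(max_tokens_per_s(100, 7, 1))  # ~14 tok/s: ~100 GB/s dual-channel DDR5 box, 7B model at 8-bit
    print(max_tokens_per_s(500, 7, 1))  # ~71 tok/s: a GDDR6 card with ~500 GB/s

By that logic, a card with several times a desktop's memory bandwidth should be several times faster per user, which is the gap being pointed out here.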
You can do 4k t/s at large batch on last-gen consumer gear. You're right that their 70B numbers are more competitive, but still very much off the mark. If these cards cost $500 instead of $1,400, then maybe they would be more compelling.
The LoudBox costs $6,000 and contains 4x n300 cards at $1,400 each, so on paper you only pay $400 for the box with 2 Xeons, a 3.8TB NVMe SSD, and 512GB of DDR4-3200 ECC RAM.
Some buyers of the LoudBox might be tempted to sell the four n300 cards separately and keep the box; I'd say it's worth at least $2,500 or so.
How fast are the n300s compared to RTX 4090 or RTX 3090 cards?
At these prices it doesn't seem worthwhile to stick them into your own workstation instead of going for theirs.
Don't beat yourself up about it. George Hotz thought NVIDIA Inception pricing for Ada RTX 6000s was $1,500, rather than a $1,500 discount, and launched a whole company based on the misunderstanding.
> According to Tenstorrent, each Tensix core features "five RISC-V baby cores," which allows scalability along with multi-chip development much more effectively.
A neat feather in RISC-V's cap. Though to be fair, RISC-V microcontrollers on PCIe boards are nothing new; Nvidia has shipped RISC-V microcontrollers (e.g. the GSP that took over from their Falcon cores) for years, and a lot of recent storage media rely on RISC-V microcontrollers.
Tenstorrent’s website has different prices for the AI workstations: LoudBox says $12,000, not $6,000; QuietBox says $15,000, not $1,500. Example below.
The real comparison for me is against a box using NVIDIA RTX cards. That's what most people use in the sub-$10,000 space for open-source models: usually $2,000-$5,000 worth of consumer GPUs with maybe 24GB of VRAM each. So how does this compare, on inference or training, to 1-2 RTX cards with 24GB? And what performance for both small and large models?
I find it a bit strange that they recommend using Ubuntu 20.04 [0] when 24.04 was recently launched... I would have expected at least 22.04 support.
Thanks, this is a good sign. I had a bad experience with the Jetson Nano, only to realize a few years later that it would be stuck on Ubuntu 20.04 forever. I don't want to repeat that experience, especially with a more expensive device.
I often wonder what an Amiga for today would look like, something different from the computers we usually have. One with just a GPU-like processor doing everything, exploiting massive software parallelism instead of dedicated chips, would fit that description.
The ways GPUs and CPUs work are so different. I'd be surprised if you could have a usable computer with just GPUs without either turning them into a CPU or having it effectively be single-threaded.
I did some five-second digging into their code, and based on their GitHub it seems they took Berkeley's BOOM CPU (written in Chisel), added support for the Vector 1.0 extension, and called the resulting core Ocelot. Seems like the core is open source, at least.
https://github.com/tenstorrent/riscv-ocelot/blob/bobcat/READ...
Not sure about the driver/software situation though. Do you actually get to run whatever RISC-V code you want on these? Could be amazing if so.