
I find your company fascinating, having also worked on chips (and chip dev tooling) for much of my career.

> But they want to work on chips, not devtools!

I have long had a gut feeling that there's an entire industry of frustrating tools that exists specifically to keep itself alive. I was once shocked to learn that my company had bought licenses for a tool whose sole purpose was to combine multiple IP-XACT specs into one... basically just parsing several XML files and merging their data! Outrageous.
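For anyone unfamiliar: IP-XACT specs are plain XML, so the core of such a tool is roughly the sketch below (the element names are stand-ins; real IP-XACT documents live in an ipxact/spirit namespace with a far richer schema):

    import xml.etree.ElementTree as ET

    # Stand-in specs; real IP-XACT files are namespaced and much richer.
    specs = [
        "<component><name>cpu</name></component>",
        "<component><name>uart</name></component>",
    ]

    merged = ET.Element("components")
    for text in specs:
        merged.append(ET.fromstring(text))  # fold each spec into one tree

    print(ET.tostring(merged).decode())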

RE orchestration: It's easy-ish now since it sounds like you're starting out with (free) open source tools, but once you start looking at things like license fair-share, you might find yourself starting to build yet-another-Slurm/LSF.

Any reason for Buck2 vs Bazel? Bazel seems more active (O(thousands) of questions on Stack Overflow for Bazel vs O(hundreds) for Buck).


Yeah, you make some good points; orchestration has historically been painful -- we've personally seen the headaches that come with scheduling on Slurm and LSF, and I'd guess some of the most thorough bikeshedding in history has been around tinkering with Slurm's multifactor scheduling logic. We're trying not to reinvent the wheel with orchestration, and we're in the midst of building interfaces that hook into Slurm instead of replacing it entirely :)

As for buck2, we decided to go with it for a few reasons:

More forgiving with gradual adoption, in our experience -- running non-sandboxed actions in Bazel is a pain, while Buck2 has been much easier to plug into existing flows.

Buck2 installation is easier and, by extension, simpler to embed into our test runner.

Respectfully, Bazel's implementation is a monolith beyond comprehension -- if we ever need to modify Buck2 and package our own fork, we're confident we could do that.


I started my career off in the chip design world, where 100% line, branch, FSM-transition, functional-group, and toggle coverage (toggle meaning individual bits in a bus switching 0 -> 1 _and_ 1 -> 0) was "table stakes". There are a lot of comments in here saying that achieving 100% coverage would be expensive - and they're right. The majority of headcount in modern chip design houses is taken up by verification engineers, whose sole job is to architect, implement, and maintain a minimal test corpus that achieves that high bar. The cost of failure is simply too high to omit this step.
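To make "toggle coverage" concrete, here's a toy Python tracker for an 8-bit bus; this is the bookkeeping real simulators do internally, with the width and samples made up for illustration:

    # A bit counts as covered only once it has been observed
    # going 0 -> 1 AND 1 -> 0 across consecutive samples.
    WIDTH = 8

    def toggle_coverage(samples):
        rose = [False] * WIDTH
        fell = [False] * WIDTH
        for prev, cur in zip(samples, samples[1:]):
            for i in range(WIDTH):
                a, b = (prev >> i) & 1, (cur >> i) & 1
                rose[i] |= (a, b) == (0, 1)
                fell[i] |= (a, b) == (1, 0)
        covered = sum(r and f for r, f in zip(rose, fell))
        return 100.0 * covered / WIDTH

    print(toggle_coverage([0x00, 0xFF, 0x00]))  # 100.0: every bit rose and fell
    print(toggle_coverage([0x00, 0x0F]))        # 0.0: low bits rose, never fell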

It was unsettling, then, to move to a SWE job where coverage was kind of... not as important.


I can think of certain types of software so critical that they must be verified to be 100% logically correct, e.g. software in surgical machines, airplanes, or high-frequency trading. The vast majority of applications, though, would survive with less coverage; I still think the sweet spot is somewhere between 70% and 80%.


The functionality of the F-91W is simple enough that I don't think a CPU would even be needed; the digital parts of this chip are probably just state machines. That said, the left half of the die shot looks like some kind of gigantic ROM, which could either be static program memory for a CPU or transition logic/data for one or more generic state machines.
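As a toy illustration of that ROM-as-transition-table idea, here's the shape such a state machine takes in Python; the states and inputs are invented for a watch-like UI and are not a claim about the actual die:

    # Toy table-driven FSM: the transition table plays the role the
    # ROM might play on the die. All names here are made up.
    TRANSITIONS = {
        ("time",      "mode"): "alarm",
        ("alarm",     "mode"): "stopwatch",
        ("stopwatch", "mode"): "time",
    }

    state = "time"
    for press in ("mode", "mode", "mode"):
        state = TRANSITIONS[(state, press)]
    print(state)  # back to "time" after cycling through all three modes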

Verilog came out in 1984, but its use for synthesis (i.e. actually compiling text into circuits) wasn't popularized until much later, after correctness bugs in synthesizers had been worked out and various other advances in design tooling had arrived. It might have been used as a simulation/verification language for the digital portions of this chip.


VHDL started as a project to document integrated circuits. The Department of Defense was acquiring more, and more complicated, integrated circuits and wanted a standard way to document their functionality. At some point someone thought, "you know, if the documentation is good enough, we could reverse it and synthesize a circuit from it" -- and that's why you use VHDL (or, more likely, Verilog) to program your FPGA.

The two languages fill the same role in the ecosystem. I have to say that I have never used either, but my impression is that VHDL has clearer syntax (if you can stomach its Ada look and feel) and Verilog has better tooling. Which makes sense, considering that one was a documentation project and the other was an internal simulation tool that escaped into the wild.


The "Silicon" part of Silicon Valley is still very much relevant. Intel, AMD, Nvidia, etc. all have major presences down there. There's also a multi-billion-dollar industry supporting silicon design shops in various ways, and it's all in SV too.


Gate-level simulation (even zero-delay; let's not mention delay-aware or power-aware) of a modern-sized CPU takes weeks just to run through some basic liveness checks. See [1] for just a taste of gate-level simulation trickiness:

[1] https://www.deepchip.com/items/0591-01.html


You can do an RV32I in 10k gates, RV64GC in maybe 30k gates? I think the GP meant barely enough to run a thin OS, but not an antique. In-order, small to no cache, you get it.

A visual RV64GC would be a pedagogical tool, not something necessary for a tape-out.


> Most don't

Is this true? I'm pretty sure that most of the regex engines I've used (grep, ripgrep, RE2, Hyperscan) use Thompson's construction or at least some NFA-based algorithm (not necessarily Thompson's; Hyperscan in particular uses Glushkov automata).
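To sketch what "NFA-based" buys you: the engine advances every live state in lockstep, one input character at a time, so it never backtracks. Here's a hand-built toy in Python for the fixed pattern 'ab*c' -- not a real Thompson construction, which would also introduce epsilon transitions:

    # Hand-built NFA for 'ab*c'; each state maps an input character
    # to the set of possible next states.
    NFA = {
        0: {"a": {1}},
        1: {"b": {1}, "c": {2}},
        2: {},
    }
    ACCEPT = {2}

    def matches(s):
        states = {0}
        for ch in s:
            states = {n for st in states for n in NFA[st].get(ch, ())}
            if not states:  # no live states left: fail, no backtracking
                return False
        return bool(states & ACCEPT)

    print(matches("abbbc"), matches("ac"), matches("abx"))  # True True False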


Yes, all of those are finite automata based. Although grep usually uses the POSIX system regex library. GNU grep will use its own finite automata based approach for a subset (a large subset) of regexes.

But most regex engines are backtracking-based: Perl, PCRE (used by PHP and probably others), Oniguruma/Onigmo (used by Ruby), JavaScript, Java, Python. That covers a lot of ground.
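For a concrete feel of the worst case, here's the classic pathological pattern against Python's re module (a backtracking engine); the timings are illustrative and vary by machine, but each extra 'a' roughly doubles the work:

    import re, time

    # '(a+)+$' against 'aaa...b' forces a backtracking engine to try
    # exponentially many ways of splitting the 'a's before failing.
    for n in (20, 22, 24):
        s = "a" * n + "b"
        t0 = time.perf_counter()
        re.match(r"(a+)+$", s)
        print(n, f"{time.perf_counter() - t0:.2f}s")  # ~4x slower per step

A finite-automata engine answers the same query in time linear in the input length, because it never has to revisit a character.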

Plus, popular reference web sites like https://www.regular-expressions.info/ are heavily biased toward regex engines that have features typically only implemented by backtracking engines.


There's also Postgres's engine, which descends from Spencer's (Tcl's Advanced Regular Expressions) and supports backreferences, but IME is very hard to goad into catastrophic backtracking.

Possibly because you need to use that feature to trigger the NFA mode, and the baseline test case for catastrophic backtracking doesn't; it just assumes the engine will backtrack.


GNU grep historical design discussion:

https://lists.freebsd.org/pipermail/freebsd-current/2010-Aug...

Modern GNU grep includes optional PCRE2 support. I incorrectly recalled that it skipped NFA->DFA conversion, but that may be how something else, like Go or RE2, works in certain cases.

https://git.savannah.gnu.org/cgit/grep.git/tree/src

Most people in tech don't seem to grasp that there are very few compatible/identical regex formats. Hell, most people don't know the comprehensive list of code points each character class includes, because they're poorly documented or undocumented. I had to write some scripts to find them for certain languages: Ruby, Python, and Rust. I advise people never to reinvent regex or Unicode parsing themselves, because there are far too many security issues and edge cases that will inevitably become problems.
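(The scripts in question amount to brute-forcing the code-point space. A minimal Python version, checking what this interpreter's \w covers -- swap in whatever class you care about:)

    import re, sys

    # Enumerate every code point that Python's re counts as \w.
    word = re.compile(r"\w")
    members = [cp for cp in range(sys.maxunicode + 1)
               if word.match(chr(cp))]
    print(len(members))      # size of \w's membership in this engine
    print(hex(members[-1]))  # highest code point included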


You're speaking to the author of Rust's regex engine.

> Hell, most people don't know the comprehensive list of code points each character class includes, because they're poorly documented or undocumented. I had to write some scripts to find them for certain languages: Ruby, Python, and Rust.

Can you say more? All of the classes are documented here for the regex crate: https://docs.rs/regex/latest/regex/#syntax

Any not listed there are from Unicode and defined by Unicode.

> I advise people never to reinvent regex or Unicode parsing themselves, because there are far too many security issues and edge cases that will inevitably become problems.

So what should I have done instead?

I generally advise people never to say "never reinvent something," because that stifles innovation and progress.

> Modern GNU grep includes optional PCRE2 support. I incorrectly recalled that it skipped NFA->DFA conversion, but that may be how something else, like Go or RE2, works in certain cases.

Not quite sure what you're trying to say here, but see: https://news.ycombinator.com/item?id=33567129


Based purely on the name and a super quick scroll through the GitHub page, it's probably the Lattice iCE series of chips, even more probably the iCE40. There are a couple of boards with iCE40 chips out there for which you can use fully OSS tooling, from synthesis to PnR to programming.


(Digital) chip designer here: this is super cool. What tech node are you targeting? Have you checked out any of the free open-source tools and PDKs from SkyWater, Google, Efabless and co?


I'm planning on using the chipIgnite program! They can build the chip I need - the harder bit is figuring out the coatings (sputter-coating post-processing is required for compatibility with the chemistry).

I am pretty new to chip design but know the DNA space quite well - I would love to do a call, if possible, because I have lots of noob questions about chip design and chip design tools. Happy to share anything about the biological side in exchange!


The $1k figure quoted by OP is not indicative of the average price of licenses in my experience. There are plenty of tools that are $15k+ in the EDA world, and various engineers in chip design orgs are always battling over who gets to use them and when. There are whole teams in big SoC design shops dedicated to managing and procuring licenses.

I was pretty far removed from the license procurement and budgeting aspect of my last chip design job, but IIRC we were in the multi-millions per year in various EDA tool licenses. That figure may or may not have included IP licenses for pre-designed off-the-shelf subsystems.


Same. We pay millions of dollars for our simulator licenses. Same again for physical design/layout licenses. These are for the standard ('best') industry tools, no IP. No idea what you get for $1k.


I hadn't even brought up IP licensing, but you're right. That's another order of magnitude of cost, and it's incredibly important.


I almost got kicked out because I "looked like" someone who had vandalized something or other the day before. They pulled me out of class, called me names, made me sit in their sad little office (thus missing more class), and tried to get me to write a confession. I refused, and they said they would "prosecute me to the full extent of the law". The next day I was exonerated with no explanation given.

The two fully-grown adults who ran the security office were former police; judging by the 20+ department badges proudly displayed on their back wall, it looked like they'd been kicked out of every department. One wonders why!


> One wonders why!

I would actually wonder why, since from what I hear, what you describe is the standard tactic for police. It gets confessions, so they do it.


Police exchange patches with people they work with.

