> I always wondered why you needed gearing mechanisms in a micro machine. Has there ever been a practical application for gears in MEMS?

IIRC, Sandia Labs' SUMMiT V process (the source of videos like [1]) was funded in part to make mechanical latches and fail-safes for nuclear weapons, though, for obvious reasons, I'm not sure what's currently in use. I don't think they found many other practical applications, but the experimentation led to TI's DMD chips, among other things.

Occasionally, MEMS techniques are used to make (relatively large) gears for watches.

I've also seen people try to use gears for microfluidic pumps, but I don't think any are much better than the current, simpler solid-state approaches.

[1] https://www.youtube.com/watch?v=GiG5czNvV4A


Fascinating, thanks for the info. Some ideas about this:

(1) I wonder if you can make an unpickable lock with MEMS.

Say, if you got a fingerprint scan or a retinal scan, the device would need a positive confirmation in order to unlock itself.

I have no idea how practical this is, but it sounds like some kind of Superman genetic authentication system for unlocking the information crystals.

(2) The other thing is: can the gears be used to store potential energy? Say, by using the microfluidic pumps, or a microspring?

Maybe you could use another piezoelectric device, or solar, to provide the electricity to run the gears, storing potential energy during peak production hours.

Then, when you need it, you release the potential energy.

The key here might be whether you can build a micro electric generator. But I don't know if you can deposit a pair of opposing micro magnets on a MEMS unit.

But even if this worked, you would need a lot of units, in the tens of billions, to produce enough electricity to do something useful.


> I wonder if you can make an unpickable lock with MEMS

I'm not sure what you mean: MEMS are generally tiny, and I can't think of why you'd need a ~1mm safe. But MEMS relay switches, which mechanically connect/disconnect circuits, do exist.

> The other thing is, can the gears be used to store potential energy?

Springs and fluid reservoirs aren't very energy or power dense; good batteries and capacitors are much more effective and reliable. MEMS flywheels have been built and are potentially competitive, but are also extremely tricky to build.
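
For a sense of scale, here's a hedged back-of-envelope in Python (all numbers are rough assumptions, not measurements: ~1 GPa working stress in a silicon spring, ~250 Wh/kg for a good Li-ion cell):

    # Rough comparison of stored-energy density: an ideal silicon spring
    # vs. a lithium-ion battery. All numbers are assumed round values.
    sigma = 1e9    # working stress in the spring, Pa (~1 GPa)
    E_mod = 160e9  # Young's modulus of silicon, Pa
    rho = 2330     # density of silicon, kg/m^3

    spring = sigma**2 / (2 * E_mod) / rho  # elastic energy density, J/kg
    battery = 250 * 3600                   # ~250 Wh/kg Li-ion cell, J/kg

    print(f"spring:  {spring:,.0f} J/kg")   # ~1,300 J/kg
    print(f"battery: {battery:,.0f} J/kg")  # 900,000 J/kg
    print(f"battery/spring: {battery / spring:.0f}x")  # ~670x

Even under generous assumptions, the battery wins by several hundred times per kilogram.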

> The key here might be if you can build a micro electric generator.

This is doable and an area of active research (for, say, charging low-power devices as a human walks, definitely not grid-scale power). Magnets are hard to work with in MEMS, so other techniques (piezoelectricity, triboelectricity) are used. [1] is currently badly written but mentions the most important bits.
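
To put numbers on "low-power devices, not grid-scale" (all values below are assumed round magnitudes, not measurements):

    # Motion/vibration harvesters typically deliver microwatts to milliwatts.
    harvester_w = 100e-6  # optimistic wearable motion harvester, ~100 uW
    sensor_w = 10e-6      # duty-cycled wireless sensor node budget
    phone_w = 2.0         # rough average smartphone power draw

    print(harvester_w / sensor_w)  # 10.0    -> plenty for a sensor node
    print(phone_w / harvester_w)   # 20000.0 -> hopeless for a phone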

[1] https://en.wikipedia.org/wiki/Nanogenerator


> I wonder if you can make micro machines at this level? The MEMS thing.

At this size range, basic accelerometers, pressure sensors, and inkjet heads are absolutely doable (though state-of-the-art MEMS, such as the mechanical vibrating frequency filters in phone RF receivers and high-end accelerometers, can have sub-100nm dimensions).

> Not with this PDK or process, no. MEMS processes are quite specialised.

But yeah, this is the problem. Although ICs and MEMS devices are made with similar tools, MEMS usually needs processing steps that don't play nicely with the steps in an IC process (e.g., etching away huge amounts of silicon to leave gaps and topography, or using processing temperatures and materials that mess up ICs). This SkyWater process cannot do MEMS.

A more general problem is that different MEMS devices often need different, mutually incompatible process steps, so a standardized process is infeasible (though http://memscap.com/products/mumps/polymumps tries).

However, there is a tiny chance that, given enough detail on the process steps and enough leeway in the design rules, a custom layout could implement a rudimentary accelerometer or similar that works after post-processing (say, a dangerous HF bath). But only with intimate knowledge of said process steps (e.g., internal material stress levels) and a lot of luck.
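
For a sense of the signal levels involved, here's a hedged back-of-envelope for a capacitive accelerometer. Every number below is an assumed, merely plausible MEMS scale, not a real design:

    # Capacitive MEMS accelerometer back-of-envelope. All values assumed.
    eps0 = 8.854e-12   # vacuum permittivity, F/m
    m = 1e-9           # proof mass, kg (about a microgram)
    k = 1.0            # suspension spring constant, N/m
    A = (200e-6)**2    # electrode area: 200 um x 200 um
    d = 2e-6           # capacitor gap, m

    x = m * 9.81 / k   # deflection under 1 g: ~10 nm
    C0 = eps0 * A / d  # rest capacitance
    dC = C0 * x / d    # first-order capacitance change

    print(f"deflection: {x * 1e9:.1f} nm")
    print(f"C0: {C0 * 1e15:.0f} fF, dC: {dC * 1e15:.2f} fF")

Sensing sub-femtofarad changes through unknown parasitics is exactly where the "intimate knowledge and a lot of luck" comes in.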


There are other ways to make PCBs with a 3D printer:

Instead of etching copper, some people directly print the circuitry with a solder extruder [1]. The idea's been around for a while [2], but circuit complexity is usually very limited. Here's a guide [3] to a similar method that uses a hobbyist 3D printer.

To improve on the article's pen-masking method, you can mount a laser instead of a pen to the 3D printer and expose a specially coated PCB [4]. Or expose UV light through an LCD [5].

One can also just mount a milling tool on the printer and cut away the copper directly [6]. A 3D printer isn't designed to take milling forces well, though, so similar but purpose-built machines are made [7].
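
All of these pen/laser/milling approaches boil down to turning a board layout into G-code toolpaths. A minimal sketch of that step (the trace coordinates are hypothetical; real jobs usually come from your layout tool's Gerber exports via a CAM program like FlatCAM):

    # Minimal sketch: turn one polyline trace into pen-plotter G-code
    # for a 3D printer. Coordinates here are hypothetical examples.
    def trace_to_gcode(points, feed=600, z_down=0.0, z_up=3.0):
        lines = ["G21 ; units: mm", "G90 ; absolute positioning"]
        x0, y0 = points[0]
        lines.append(f"G0 Z{z_up} ; pen up")
        lines.append(f"G0 X{x0} Y{y0} ; travel to start")
        lines.append(f"G1 Z{z_down} F{feed} ; pen down")
        for x, y in points[1:]:
            lines.append(f"G1 X{x} Y{y} F{feed} ; draw")
        lines.append(f"G0 Z{z_up} ; pen up")
        return "\n".join(lines)

    print(trace_to_gcode([(10, 10), (30, 10), (30, 25)]))  # example trace

Swap the pen-up/pen-down Z moves for laser on/off or spindle commands and the same structure covers the other methods.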

The most impressive methods are geared toward industry:

A cutting-edge industry-grade electronics 3D printer looks like [8]: an inkjet-style printer with conductive and insulating (dielectric) inks.

Somewhat related: you can use a laser-etching and electroplating process ("3D Moulded Interconnect Devices", or "3D MID") to make PCBs on weirdly shaped (i.e., definitely not flat) 3D-printed surfaces. [9] is an impressive example, and definitely check out a search engine's image results [10].

[1] http://diy3dprinting.blogspot.com/2015/01/voxel8-conductive-...
[2] http://blog.reprap.org/2009/04/first-reprapped-circuit.html
[3] https://www.instructables.com/id/3D-Printing-3D-Print-A-Sold...
[4] https://dangerouspayload.com/2017/12/17/pcb-uv-exposure-scan...
[5] https://www.youtube.com/watch?v=vxl7glJMKOQ
[6] https://www.instructables.com/id/PCB-Milling-Using-a-3D-Prin...
[7] https://www.bantamtools.com/pcb-milling-machine
[8] https://www.nano-di.com/dragonfly-pro-3d-printer
[9] https://www.festo.com/group/en/cms/10157.htm
[10] https://duckduckgo.com/?q=3d+moulded+interconnect+devices&t=...


I'll second (third?) LineageOS + microG! Full ROMs combining both are available [1] (OS updates often have to be installed manually, but popular phones are supported for years). In addition to using everything from the F-Droid repository, I can run Slack/Discord/Messenger/WhatsApp/GroupMe (though notifications are hit-or-miss) and a good few other apps (Duolingo, bank mobile apps, Uber but not Lyft). Many of these apps can be installed via Yalp Store [2], a frontend for the Google Play store with a built-in account to provide some anonymity. It'll do until I get a Librem5 [3].

TL;DR: no Google, most things work, but not necessarily smoothly. I've used it for a few years now.

[1] https://lineage.microg.org/
[2] https://f-droid.org/en/packages/com.github.yeriomin.yalpstor...
[3] https://puri.sm/products/librem-5/


Lots more detail for the uninitiated:

The most complex parts of phones and computers (e.g., the CPU) are integrated circuits (ICs)[0], which are (nowadays) billions of nanometer-sized transistors on top of a piece of silicon. The way these are made is arguably the most complex, high-precision manufacturing process in the world, and is usually done in multi-billion-dollar "fabs" (fabrication plants) by huge companies (e.g., Intel).

Even the most basic IC fabrication, like in the article, absolutely requires maybe ~5-10 complex tools (furnace, sputterer, etc.; $1000-$10k each at current eBay prices, at very low quality, and only if you know how to rebuild/fix all of them) and a host of supporting equipment (a fume hood for seriously dangerous[1] chemical work, etc.). To get reasonable results, you also need to understand the device physics and then test multiple times to get the process right. I have some serious respect for Sam Zeloof of the article for getting this to work: it's at least an order of magnitude more difficult than other home manufacturing (3D printing, woodworking, welding, sewing...), even if you're already an industry expert. And his device used 6 transistors; you'd need to get the transistor manufacturing reliability up significantly to make a useful microprocessor (instead of small analog circuits), which probably starts at several thousand transistors [2].
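
To make that reliability point concrete: if each transistor independently works with probability p, a circuit needing all n of them works with probability p^n. A quick sketch (the p values are made up for illustration):

    # Chip yield if each of n transistors must work and each does so
    # independently with probability p (no redundancy). Illustrative only.
    for p in (0.90, 0.999, 0.999999):
        print(f"p={p}: 6-transistor: {p**6:.3f}, 3000-transistor: {p**3000:.2e}")

At 90% per transistor, a 6-transistor circuit works about half the time, but a several-thousand-transistor microprocessor essentially never does; you need something like six nines of per-transistor reliability before thousands of them survive together.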

If (as the GP post asks) you want an automated device that makes an IC for you given a digital design file, well, hm. The closest things we have today are companies that manage and run the equipment for you (fabless semiconductor companies (Qualcomm, AMD...[3]) give their chip designs to, e.g., TSMC to manufacture). Academic researchers often send parts in together to reduce costs[4], in which case you could get tens of identical (reasonably simple) chips for several thousand dollars. Someone linked to [5], which looks like an attempt at a more open, hobbyist-friendly version of the same thing. I did run across [6] once, which _does_ seem to be attempting to make an easier-to-use, very small, automated system. I've no idea what their status is.

A desktop device as simple to use as a 3D printer is barely even at the conceptual-possibility level at the moment, and then only when people start talking about sci-fi self-assembly, molecular nanomanufacturing, and a century of R&D.

[0] https://en.wikipedia.org/wiki/Integrated_circuit
[1] http://lnf-wiki.eecs.umich.edu/wiki/Piranha_Etch
[2] https://en.wikipedia.org/wiki/Transistor_count
[3] https://www.electronicsweekly.com/news/business/manufacturin...
[4] https://www.mosis.com/what-is-mosis
[5] https://libresilicon.com/
[6] https://www.minimalfab.com/en/


Thanks for the detailed response.


To elaborate:

A single IC die (a "chiplet" when stacked on others) is made of a layer of transistors and wiring, up to several hundred nanometers thick, on top of a much thicker (several hundred micrometers) silicon wafer substrate.

Yep, stacking these dies ("chiplets") on top of one another is one form of 3D, and is definitely useful: IIRC, AMD's recent popular Ryzen 3000 line uses chiplets, and the Raspberry Pi's RAM chip is stacked on top of the CPU (in a much cruder post-IC-manufacturing assembly process). The shorter distance between dies (compared to placing them in separate plastic packages on a PCB) can lead to maybe an order of magnitude improvement (in speed/power/etc.). It's hard to stack more than a few layers, though.

The most ambitious form of 3D design (and the one most people probably think of) is multiple thin layers of transistors on a single die ("monolithic 3D"); this would give maybe another order of magnitude improvement. Monolithic 3D memory chips are becoming popular (V-NAND, etc), with the most recent at ~100 layers. Monolithic 3D CPUs are still an unsolved problem because they need different, more difficult process steps and better heatsinks (but we're close!).


"Raspberry Pi's RAM chip is stacked on top the CPU"

FYI, I don't think this is done in the most recent Pi 4.


I can't imagine the kind of quality control you have to have to achieve a useful yield, performing a lithography operation one hundred times on the same piece of silicon.


Even 3D NAND is not honest monolithic 3D, as it uses edge bonding rather than real metal layers for the word lines.


Is HBM also stacked in layers? IIUC there's like four layers of memory on each HBM2 die


Yeah, it's 3D-stacked: 4 layers for HBM, 8 for HBM2, and 12 for HBM2E.

It's neat but really expensive, and outside of edge cases I don't think the gains from that sort of design are as big for RAM as they are for computing cores.


Some other open Slack alternatives I know about include:

Matrix[1]: maybe the most impressive. A chat _protocol_ with multiple server and client implementations, gateways to everything, and just out of beta.

Zulip[2] (as mentioned), Rocket.Chat[3], and Mattermost[4] are Slack clones: open source, with development open to pull requests to various degrees; you can pay to have them hosted in the cloud or to get support.

Non-free or qualitatively different solutions include Discord, MS Teams, and the original IRC, I guess.

Apologies for any inaccuracies or omissions; I'm not an expert in the space.

[1] https://matrix.org/
[2] https://zulipchat.com/
[3] https://rocket.chat/
[4] https://mattermost.com/


None of these will:

- "replace email"

- "increase productivity"

- "centralize communication"

None of them. Not a damn one. I've worked at over 20 startups, and there has never been a single case where any of these claims were true. They purchase one of these damn chat clients, and it becomes the biggest timesink the entire organization has, every. single. time.

But, you know what? You can't get rid of it. Everyone has it. Nobody wants to change off of it. So what do they do? Introduce other methods of communication through other apps. So no, it doesn't reduce the noise. If anything, it increases it.

Also, you think you've made the right choice in the beginning. It seems great; your 7-50 person organization works fine. Wait until it's over 300 people, or 1000. It becomes an absolute nightmare.

Slack is probably the worst for 1000-person organizations. It was horrible at 150; then, at 300, you start seeing people fight each other over rooms.


Honestly, chat at an organization only seems to replace phone calls, and only partially. Which is not a bad thing, but I'm not sure it's even a good idea to try to replace email.


A question about the requirements of those alternatives (and Slack, I suppose): Zulip, for example, says that if you have >100 users you need a machine with 4GB RAM and 2 CPUs. Why does a chat app need that much processing power? Most of the messages are simple text, and IRC could handle thousands of users ages ago. What is the processing used for?


The Zulip server uses very little CPU, but the RAM is important. (I work on Zulip.)

Here's what the docs say, for reference (excerpt of https://zulip.readthedocs.io/en/latest/production/maintain-s... ):

> For an organization with 100+ users, it’s important to have more than 4GB of RAM on the system. Zulip will install on a system with 2GB of RAM, but with less than 3.5GB of RAM, it will run its queue processors multithreaded to conserve memory; this creates a significant performance bottleneck.

> chat.zulip.org, with thousands of user accounts and thousands of messages sent every week, has 8GB of RAM, 4 cores, and 80GB of disk. The CPUs are essentially always idle, but the 8GB of RAM is important.

As a practical matter, I think 4GB of RAM is not a lot to ask for a service that 100+ users are actually concurrently using all day. That's a small fraction of the RAM the clients are consuming; and you can get a suitable cloud machine from Digital Ocean (simpler pricing than AWS, so good for a quick price check) for $20 USD/mo.

On the implementation side, it turns out that a lot of moving parts go into a full-featured chat app. Here's a partial architecture diagram, plus detailed exposition: https://zulip.readthedocs.io/en/latest/overview/architecture... A database, plus caches, plus code for lots of features running across a number of processes, adds up to a few gigs of memory.


Well, my question is: why do they consume so much memory/CPU? What makes these services so different from earlier chat systems? Are these users continuously doing video chat with each other, or keeping sockets open? For simple text chat, 4GB of RAM seems absurdly high, considering that IRC was able to handle thousands of users three decades ago.


IRC is essentially stateless. The server doesn't remember what messages people have sent; so it can't tell you what was said last night when you come online in the morning, or what was just said in some channel you weren't previously listening to. That work gets pushed out to clients, and to add-on services.

The IRC server also doesn't store images or any other kind of file people want to show each other. There are lots of good practical reasons to want to share images in a conversation (e.g., screenshots), plus of course silly GIFs. That work also gets pushed out to add-on services.

When people say here that Slack or Zulip etc. are a much better user experience than IRC, I think those two things -- message history, and images -- are major reasons for that.

Message history means a database that gets big, and images mean a lot of data too. There's a large working set of both of those that you want fast access to. That means providing adequate RAM.
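
To make the contrast concrete, a toy in-memory sketch (plain Python, no networking; purely illustrative of where the state lives, not of how either server is actually written):

    # An IRC-style relay forgets messages once delivered; a Slack/Zulip-
    # style server stores them and can replay history to late joiners.
    class StatelessRelay:                # IRC-style
        def __init__(self):
            self.online = set()
        def send(self, text):
            # Delivered only to whoever is connected right now, then gone.
            return [(user, text) for user in self.online]

    class StatefulServer:                # Slack/Zulip-style
        def __init__(self):
            self.history = []            # the database that keeps growing
        def send(self, text):
            self.history.append(text)
        def catch_up(self):              # what you see when you come online
            return list(self.history)

    irc, zulip = StatelessRelay(), StatefulServer()
    irc.send("hello?")                   # nobody online: the message is lost
    zulip.send("hello?")
    print(zulip.catch_up())              # ['hello?'] -> this needs RAM/disk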


Digital notebooks (Electronic Lab Notebooks --- ELNs) exist, but haven't caught on yet.

Many researchers need to be able to note down arbitrary diagrams, not just text, in real time, which pretty much means a low-latency tablet with a stylus. This only became feasible in the last decade or so.

Many ELNs do implement published hashes for verification ("trusted timestamping").
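
The hashing half of that is simple; here's a minimal sketch in Python (the filename is hypothetical, and the hard part, publishing the digest somewhere tamper-evident such as an RFC 3161 timestamping authority, is what varies between ELNs):

    # Fingerprint a notebook entry. Publishing the digest somewhere you
    # can't quietly alter later (a timestamping authority, a newspaper,
    # a blockchain...) is what makes the timestamp trustworthy.
    import hashlib

    def notebook_digest(path):
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    # print(notebook_digest("2019-08-14_entry.pdf"))  # hypothetical file

Anyone holding the published digest can recompute it later to verify the entry wasn't modified after the fact.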


I personally understand my own ideas better (it sounds weird, I know) when I draw graphics and diagrams, which can be done quickly and easily on paper.


According to the linked statement from the UC Office of Scholarly Communications [1], "No matter what happens moving forward, UC scholars will still be able to use ScienceDirect to access most articles published prior to January 1, 2019 because UC has permanent access rights to them."

[1] https://osc.universityofcalifornia.edu/open-access-at-uc/pub...

Edit: dannykwells beat me to it, and in more detail.


Usually, CAD programs are written for just one of several domains, outside of which they don't work as well:

- Fusion360/SolidWorks/Onshape/Catia/NX are engineering CAD programs. They are designed to make geometrically (relatively) simple parts with exacting dimensions (think vector SVG vs raster PNG) that will fit in mechanical assemblies with other parts. Models (e.g., airplanes) are often simulated for strength (FEA) and fluid flow (CFD). There are no good open source versions (FreeCAD is trying).

- SketchUp/AutoCAD are for architectural and civil purposes. These create buildings, which often have even simpler geometric features, but multiple floors, plumbing, electrical runs, HVAC, and other layers. I don't know too much about these.

- Blender/Maya/3dsMax are for "artistic" purposes. They can sculpt very complex shapes from triangular approximations, but can't hold exact dimensions very well (raster PNG vs vector SVG). They are used for computer graphics (movies, games). I'm not an expert in these, either.

I tend to design robots with Fusion, and it's definitely better for my purposes than SketchUp (which lacks easy watertight meshes, assemblies, and simulation), but I know people who work in different domains and therefore use other programs.

