
> Employees may be leaving the embedded space

Embedded opportunities have been slowly shrinking for years. For whatever combination of reasons, a lot of employers think that embedded work is easy or otherwise doesn’t require a large budget.

It’s increasingly bizarre to get a well-designed IoT device with a very polished mobile app and web UI, then struggle with hardware factory resets and firmware upgrades because the embedded side of the product didn’t get the same level of attention.

It’s like embedded somehow became an afterthought in the industry. Perhaps because it’s the only part of the system that doesn’t have a highly polished UI layered on top of it? Over the past decade I’ve witnessed multiple companies over-focus on anything that goes well into slide decks (UX mock-ups, animations, etc.) or generates vanity metrics (number of backend servers, requests per second to the cloud) while ignoring anything that doesn’t have a visual pop to it (embedded firmware).



"Programming is just typing" was a typical management refrain when I was in the embedded field (more properly, now I'm adjacent to it). It was frustrating. Computer scientists and programmers aren't Real(tm) Engineers so they don't deserve as much money. They can't be in charge because you can't have engineers answering to non-engineers (ie, very few leads let alone managers coming from the software side). Which leads to a culture that's overly hardware centric with insufficient leadership/management understanding of what software actually entails.

Also, "This doesn't meet the current requirements, fix it with software!" The best one was when the case wasn't waterproof... How the fuck is software supposed to fix that? They literally expected the software team to work magic. A lot of pushback got that requirement kicked back over to the mechanical engineering team to address, but it took months. Moronic.


It might also be that the management chains in embedded are typically former engineers or EE people. In web development you get a lot of manager-engineer relationships where the boss can't do their subordinates' jobs: that's the perfect recipe for high pay.

In embedded and other non-software (but still engineering) firms, the management is typically engineers who CAN do their subordinates' jobs; they just don't want to.


That's what I was getting at with this:

> They can't be in charge because you can't have engineers answering to non-engineers (ie, very few leads let alone managers coming from the software side). Which leads to a culture that's overly hardware centric with insufficient leadership/management understanding of what software actually entails.

In hardware-centric orgs, software developers are a small step above technicians in their pecking order, sometimes below. The pecking order itself is annoying enough, but when you switch from designing your own ASICs to buying COTS dev boards and primarily just adding software to them, you're not really a hardware company anymore. But it'll take another generation for them to realize it, or a severe crunch if someone comes along and realizes that they can pay embedded devs $200k+ and eat the lunch of half these companies.


Many hardware companies still see software as just another line item on the BOM, like a screw or a gasket: something you build cheaply or buy from a supplier and sprinkle onto the product somewhere on the assembly line. These hardware companies have no concept of typical measures of software quality, building an ecosystem, release management, sometimes not even source control. They tell an overworked embedded software engineer, "Go build something that barely meets these requirements, ship it, and then throw the scrap away," as if it were metal shavings from their CNC machine.


At a previous company our firmware was literally called by a part number. So I would regularly work on the repos 5400-520, 5400-521, 5400-524, 5400-526, etc.


I remember an embedded company I joined; when I asked how they manage releases, the eng manager said, "well, we find an engineer who has a copy of the source code that can successfully build with no errors, copy it off their workstation (debug symbols and all), and send it to the factory to get flashed onto devices." Total clown show.


One large phone manufacturer maintained a room full of PCs, each machine configured to build for one model of phone.

I don't know if they kept backups. I wouldn't take that bet.


'sometimes we check it into sourcesafe' in 2018...


Thanks for bringing up weird memories. I remember software having not a name and a version but a part number, as if it weren’t something living and evolving, which networked firmware needs to be.


Assigning a part number to firmware is perfectly normal. It's part of the Bill of Materials for the product.

What is not normal is referring to that part number anywhere except on the BoM.


Perhaps I'm missing some deeper use case here. More complicated firmware projects can have only part of the system loaded during production, namely the bootloader and some system-level image(s). The firmware that has all of the business logic can be pushed/pulled when a customer activates the device much later on. How would a part number for that image (or really, set of images) be useful?


In your first case, imagine that you have a contract manufacturer that is told to build something according to a particular Bill of Materials. You change the firmware and assign it a new part number (or assume that the version is embedded in the part number). Internally, the BOM is updated with this new part# and as part of your process, the manufacturer is sent the new BOM. Manufacturer goes to build the product and discovers that the firmware they have is a different part number than on the BOM. If not for this, they'd be building with the wrong firmware version.

In your second case, if the only person loading it is the customer, a part number may not solve anything other than the business managing inventory. However, if you're already in the habit of assigning part numbers to everything you build (I have come to be a big advocate of this), then it really is just part of the process.

I've seen a mix of both: there is a standard firmware version for the hardware combined with a set of customer customizations. In this situation, not having a unique part number for each combination (of firmware + customer config) resulted in confusion, angry customers and a manufacturing department having no idea exactly what it was that they were supposed to be building.

Yes, there are other ways of solving these problems but assigning unique numbers works well enough.
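To make the "wrong firmware on the line" failure mode concrete, here's a minimal sketch of one way to make the check mechanical: bake the BoM part number into the image at build time so a factory test fixture (or a quick objcopy/strings pass over the binary) can compare it against the BOM it was handed. The part number, section name, and function here are illustrative assumptions, not any particular vendor's convention.

    /* Illustrative only: embed the BoM part number in the firmware image
       so it can be read back and checked against the BOM revision. */
    #include <string.h>

    #define FW_PART_NUMBER "5400-526 Rev C"   /* hypothetical part number */

    /* Keep the string in a dedicated section at a known location in flash,
       so it can be extracted from the binary without running the firmware. */
    static const char fw_part_number[]
        __attribute__((section(".fw_ident"), used)) = FW_PART_NUMBER;

    int fw_ident_matches(const char *bom_part_number)
    {
        return strcmp(fw_part_number, bom_part_number) == 0;
    }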


> typical measures of software quality

To play devil's advocate - are there any (useful) measures of software quality? Even this place is mostly programmers and we can't even agree whether we should be writing unit tests or not.


Sort of. There are accurate measures with verifiable predictive power. But whether they're useful depends on cost/benefit, which in turn depends on your ability to implement them and on market forces.

There's a company that looked at reducing critical defects from a sort of actuarial perspective. They have a few decades of cross-industry data. I've used their model, and it works. If you don't need a numerical result, you can just read the white paper about what's most important [1].

So to partially answer your question: unit testing reduces defects, but reducing defects might not be worth the costs to you.

And defects might not be the only thing that matters. There are other measures of goodness, like maintainability, which complicates the answer. You'd have to collect your own data for that.

[1] https://missionreadysoftware.com/articles/


I’d say for microservices and large distributed systems, you do need a pyramid of testing with most of the coverage at the unit level. The system is just too large and continuously changing as all the different versions of the services get released.


this is grimly funny to me because where I work, software is a literal line item in the manufacturing BOM, each release gets a part number and is physically shipped to the factory

it makes some sense, but the company mindset about the role of software is very clear


One thing I’ve come to see, having worked in both web and embedded for two decades now: a lot of embedded developers miss the “product” side of what they are building. This probably doesn’t explain the lower pay, but it might be a reason why embedded overall doesn’t get the recognition it deserves: the embedded engineers don’t know how to communicate their value / provide more value to the business.

This is becoming increasingly important, as you note: devices are all connected now, and things like setup, updating, and connectivity are crucial. Designing a firmware update process that is not only robust but user-friendly is a lot more work than just building a bootloader: you need to communicate to the user, in real time, what is going on. Cancelling an action needs to be immediate and provide feedback on the progress of the cancellation. Error handling needs to provide useful information, and probably a special UX.

These do need to be factored into the embedded software right from the start, because they significantly increase the complexity, and it’s extremely easy for management to miss how crucial that part is. I keep a few horrible Chinese consumer electronics devices on hand (webcam, MP3 player, mobile phone) to show what I mean. The only difference between an iPod touch and a no-name MP3 player with a touchscreen is… the software.

Having to press 3 inaccessible buttons, connect a USB volume named “NO NAME”, have it hang for 2 minutes when unmounting, then show a black screen for 3 more minutes before revealing … that it didn’t update, versus a smoothly progressing update bar showing the steps and the device showing up in my online dashboard as soon as it reboots: that’s what my value as an embedded engineer is.
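To make that concrete, here's a rough sketch (in C, with entirely made-up names, not a real API) of the kind of update state machine I mean: every transition emits a status event that a screen, LED pattern, or cloud dashboard can render; cancellation is acknowledged immediately; and failures carry a reason instead of a black screen.

    /* Sketch only; names and structure are illustrative assumptions. */
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    typedef enum {
        UPDATE_IDLE,
        UPDATE_DOWNLOADING,
        UPDATE_VERIFYING,
        UPDATE_FLASHING,
        UPDATE_REBOOTING,
        UPDATE_CANCELLING,
        UPDATE_FAILED
    } update_state_t;

    typedef struct {
        update_state_t state;
        uint8_t        percent;     /* progress within the current state */
        int            error_code;  /* nonzero only when state == UPDATE_FAILED */
    } update_status_t;

    /* The UI layer (display, LED pattern, cloud agent) registers this. */
    typedef void (*update_status_cb)(const update_status_t *status);

    static update_status_cb g_notify = NULL;
    static volatile bool    g_cancel_requested = false;

    void update_set_status_cb(update_status_cb cb) { g_notify = cb; }

    static void report(update_state_t state, uint8_t percent, int error_code)
    {
        update_status_t s = { state, percent, error_code };
        if (g_notify) g_notify(&s);
    }

    void update_request_cancel(void)
    {
        /* Acknowledge the cancel immediately; the worker finishes the
           current safe step (e.g. the sector being written), rolls back,
           and then reports UPDATE_IDLE or UPDATE_FAILED. */
        g_cancel_requested = true;
        report(UPDATE_CANCELLING, 0, 0);
    }

None of that is hard, but it has to be designed in from the start; bolting progress reporting and cancellation onto a bootloader after the fact is where the black-screen-for-3-minutes devices come from.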


There was a time in the late 90s/early 2000s where this happened to driver development on the (Classic) Mac. Companies would make some USB device and get a reasonable driver made for Windows (I assume - I wasn't using Windows at the time). Then they would say, "Well, MacOS is 10% the market of Windows, so we'll pay 1/10th for someone to develop a driver for this." But it turned out that USB worked completely differently on the Mac from how it did on Windows, so none of the Windows code was relevant at all for the Mac devs. They would either get what they paid for (which was terrible for users) or they would not get a Mac driver at all. This is around the time I stopped buying any device that required installing a driver. Many of these devices didn't really need one because they were regular USB-spec devices (keyboards, scanners, etc.). To this day, I will not install a driver for a fucking mouse. Why would that be required?


> It’s increasingly bizarre to get a well-designed IoT device with a very polished mobile app and web UI, then struggle with hardware factory resets and firmware upgrades because the embedded side of the product didn’t get the same level of attention.

Why? The issue is that you have to actually ... you know ... PLAN when you have an embedded device.

You can churn the app and the web and the backend infinitely because there is no penalty for doing so. If you take that attitude and apply it to embedded, you wind up with an expensive pile of scrap.


Yeah, I sorta specialize in that whole IoT firmware update/fleet monitoring/make sure everything at the edge runs smoothly end of things, and if you find a company that realizes that this is something that MUST work smoothly if they're going to scale, then it's a very sweet place to be. Even better, that sorta work combines low-level C++ with lots of back-end web service work, so you're never 'just' a C++ programmer.



