We use Apigee's Edge product. It provides API management tools like authentication, authorization, rate limiting, etc., before the request hits your actual API. It's a pretty good product, if that isn't your core competency.
It's not a lot of people's core competency, which is exactly what makes API gateways so useful.
This way anyone can write some half-baked endpoint that returns JSON, put Apigee or one of its competitors in front of it, and let it handle the hard stuff like authorization and rate limiting, without which you can have a really rough time.
Auth is a routine job; only a really careless developer manages to make simple token auth vulnerable. There's no "tradeoff" in handing auth off to a man-in-the-middle because it's "hard". Besides, there are a bunch of libraries out there that do it for you on your own servers.
I write C code for embedded devices and haven't been able to use malloc in 6-ish years. Everything is either statically or stack allocated. It changes how you perceive memory usage and allocation.
For example, it used to bug me that I had to statically allocate memory to manage an event that was used for maybe 0.1% of the product's life. It seemed so inefficient. Yes, you could potentially co-opt the event's memory and use it elsewhere, but then you had to deal with a bunch of other considerations: could they ever run at the same time? Would the code be maintainable? Etc.
Or the other day I accidentally set a function-scoped buffer's size too large and it caused a stack overflow. That was a pain to debug, because the exception happened "before" the function started running (in the function call preamble, where the stack frame is set up). From the debugger, it looked like the return of the previous function caused the issue.
I don't understand a large part of the brouhaha. In a sense, static memory allocation feels much more natural than a bunch of mallocs in random places around the code, to me anyway.
Maybe it's because quite many years ago I used to do some very narrow scope algorithms in C for some code that needed high reliability.
How could such code even have mallocs? You would need to then know that at no point the memory requested would be greater than memory available. So if you already know that, why not allocate the memory beforehand?
Take your example: a feature used for 0.1% of the product's life. Surely you couldn't tolerate the software hanging on a memory error 0.1% of the time? So if the product supports the feature, it must always work, and thus memory must always be reserved for it. I don't see how it could really work otherwise.
The other end of the spectrum, the "modern way" even in high performance tasks, with reckless java object creation and jvm memory shuffling almost makes me sick. Then everybody's tuning the garbage collectors to avoid constant full gc. It's heuristics. Oracle makes drastic changes to the GC defaults between minor releases.
While there perhaps aren't many direct memory bugs in such code, it does become very unpredictable.
Then, to fix that, it's back to preallocated static pools, primitives, self-implemented cache cleaning and whatnot, and you've lost quite a lot of your Java style.
The avr-libc at least comes with a malloc (rather, heap) implementation. It's mostly a trap for new embedded programmers who haven't yet internalized that you shouldn't go anywhere near dynamic memory allocation on these kinds of processors.
(Embedded programming is really quite nice to teach clean C, you learn to appreciate what the cost of library functionality is, that you can't just start using floats, that on some architectures updating a 32 bit integer is a non-atomic operation spanning 5 or more clock cycles..)
AVR-GCC is really quite a hack. It abuses types right and left (float? double? int? That's not how I knew you!). And the compiler thinks it works on a 16-bit machine, which often yields weird code.
Unlike Java, for example, C is very lax about what the different types mean. In particular, there is no guarantee about the exact size of int or double, only minimums: int must cover at least 16 bits' worth of range, and double must have at least the range and precision of float. The standard does require DBL_DIG in float.h to be at least 10, which avr-gcc (with its 32-bit double) fails to satisfy, so it's not strictly conforming on that point.
I was delighted to start programming in java way back when just for this reason. Writing truly portable C code really requires #defines for all the basic types, which is really ugly.
Anyone doing embedded code that is pretty close to the metal will have to avoid using malloc. I also write C code for embedded devices and also haven't used malloc for the past 15 years.
A bit off-topic, but Zigbee came up in a discussion with a friend recently when talking about adding WiFi functionality to PLCs. He mentioned that Zigbee devices are pretty much unusable alongside existing WiFi deployments because they run over the same bands/frequencies. Can you comment on this at all? Thanks!
It's not unusual. I did some work in civil avionics, and we were not permitted to use dynamic memory allocation due to concerns over heap fragmentation.