Hacker News | wucke13's comments

My take: containers forced developers to declare various aspects of the application in a standardized, opinionated way:

- Persistent state? Must declare a volume.
- IO with external services? Must declare the ports (and maybe addresses).
- Configurable parameters? Must declare some env variables.
- Transitive dependencies? Must declare them, but using a mechanism of your choosing (e.g. via the package manager of your base image distro).

Separation of state (as in persistence) and application (as in binaries, assets) makes updates easy. Backups, too.

Having almost all IO visible and explicit simplifies operation and integration.

And a single, (too?!?) simple config mechanism increases reusability, by enabling e.g. lightweight tailoring of generic application service containers (such as mariadb).

Together, this bunch of forced yet leaky abstractions is just good enough to foster immense reuse and composability across a plethora of applications, all while allowing them to be treated almost entirely as black boxes. IMHO that is why OCI containers became this big compared to other virtualization and (application-)container technologies.
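
To make that concrete, here is a minimal Dockerfile sketch of those forced declarations; the image, package name, paths, and values are all made up for illustration:

    # Hypothetical service image; "myservice" and all values are invented.
    FROM debian:stable-slim
    # Transitive dependencies: declared via the base image distro's package manager.
    RUN apt-get update && apt-get install -y --no-install-recommends myservice \
        && rm -rf /var/lib/apt/lists/*
    # Configurable parameter: overridable at container start (docker run -e ...).
    ENV MYSERVICE_LOG_LEVEL=info
    # IO with external services: the port the application listens on.
    EXPOSE 8080
    # Persistent state: lives in a volume and survives container replacement.
    VOLUME /var/lib/myservice
    ENTRYPOINT ["myservice"]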


Yeah. I remember when "okay, cool, how do we deploy, run, maintain the thing we (our leadership) just bought from you" was several weeks of meetings with the vendor and discussing awful "human-readable" documentation (if it existed).

Those meetings still happen sometimes, but "cloud" *aaS and containers really put a dent in that sort of thing.


Yep, this is why. Containers are a way to package more of the OS environment than you otherwise could.


I know quite a few people in the safety/aviation domain who rather dislike the subranges, as they insert run-time checks that are not easily traceable to source code, thus escaping the trifecta of requirements/tests/source code (which must all be traceable to/covered by each other).

Weirdly, when going through the higher assurance levels in aviation, defensive programming becomes more costly, because it complicates satisfying the assurance objectives. SQLite (whose test suite reaches MC/DC coverage, the most rigorous coverage criterion required in aviation) has a nice paragraph on the friction between MC/DC and defensive programming:

https://www.sqlite.org/testing.html#tension_between_fuzz_tes...


Ideally, a compiler can statically prove that values stay within the range; it's no different than proving that values of an enumeration type are valid. The only places where a check is needed are conversions from other types, which are explicit and traceable.


If you have

    let a: u8 is 0..100 = 1;
    let b: u8 is 0..100 = 2;
    let c = a + b;
The type of c could be u8 in 0..200. If you have holes in the middle, the same applies. Which means that if you want to make c a u8 in 0..100, you have to explicitly clamp/convert/request that, and that would have to be a runtime check.


In your example we have enough information to know that the addition is safe. In SPARK, if that were a function with a and b as arguments, for instance, and you don't know what's being passed in, you make it a pre-condition. That moves the burden of proof to the caller to ensure that the call is safe.


But obviously the result of a + b is [0..200], so an explicit cast, or an assertion, or a call to clamp() is needed if we want to put it back into a [0..100].

Compile-time constant expression evaluation, as in your example, may suffice for the compiler to prove that the result lies within the bounds of the type.
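
As a rough illustration in today's Rust, which has no built-in range types, that explicit narrowing step can be sketched with a hypothetical newtype and a checked constructor (all names made up):

    // Hypothetical stand-in for "u8 is 0..100"; the range check is explicit.
    #[derive(Clone, Copy, Debug)]
    struct Bounded(u8); // invariant: value is in 0..100

    impl Bounded {
        fn new(v: u8) -> Option<Self> {
            // The runtime check the thread is talking about.
            (v < 100).then_some(Bounded(v))
        }
    }

    fn main() {
        let a = Bounded::new(1).unwrap();
        let b = Bounded::new(2).unwrap();
        // a.0 + b.0 is known to fit 0..200, but squeezing it back into 0..100
        // needs an explicit, checked conversion (or a clamp).
        let c = Bounded::new(a.0 + b.0).expect("sum out of range");
        println!("{c:?}");
    }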


That's prohibitively expensive in the general case, when external input is used and/or when arithmetic is performed on the values (the main difference to sum types).


But if the number type's value can change at runtime (as long as it stays within the range), it may not always be possible to check at compile time.


The branch of mathematics you need to compute the bounds of the result of an operation is called Interval Arithmetic [1]. I'm not sure of where its limits are (hah), but at the very least it provides a way to know that [0,2] / [2,4] must be within [0,1].

I see there are some hits for it on libs.rs, but I don't know how ergonomic they are.

[1] https://en.wikipedia.org/wiki/Interval_arithmetic
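
For intuition, here is a tiny Rust sketch of the division rule; this is not any particular crate's API, and it assumes the divisor interval is strictly positive:

    // Interval division: take min/max over all endpoint quotients
    // (valid only when the divisor interval does not straddle zero).
    fn div(lhs: (f64, f64), rhs: (f64, f64)) -> (f64, f64) {
        assert!(rhs.0 > 0.0, "divisor interval must exclude zero");
        let candidates = [lhs.0 / rhs.0, lhs.0 / rhs.1, lhs.1 / rhs.0, lhs.1 / rhs.1];
        let lo = candidates.iter().cloned().fold(f64::INFINITY, f64::min);
        let hi = candidates.iter().cloned().fold(f64::NEG_INFINITY, f64::max);
        (lo, hi)
    }

    fn main() {
        // [0,2] / [2,4] = [0,1], matching the example above.
        println!("{:?}", div((0.0, 2.0), (2.0, 4.0)));
    }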


This is basically excuses being made by C people for using a language that wasn't designed for and isn't suitable for safety-critical software. "We didn't even need that feature!"

Ada's compile time verification is very good. With SPARK it's even better.

Runtime constraints are removable via pragma, so there's no tradeoff at all to having them in the language. One pragma turns them into static analysis annotations with no runtime consequences.


I like how better, more reliable code is more expensive to certify, and the problem is the code rather than the certification criteria/process being flawed.


> as it inserts run-time checks that are not easily traceable to source code

Modifying a compiler to emit a message at every point where a runtime check is auto-inserted should be pretty simple. If this were really that much of an issue, it would have been addressed by now.


Neither the Rust nor the Ada spec is formal in the sense of being consumable by a theorem prover. For Ada/SPARK there are of course assumptions about the language semantics built into SPARK, but AFAIK these are nowhere coherently written down in a truly formal (as in machine-readable) spec.


There was also Larch/Ada [0], which was a formally proved subset of Ada, developed for NASA [1].

[0] https://apps.dtic.mil/sti/tr/pdf/ADA249418.pdf

[1] https://ntrs.nasa.gov/citations/19960000030


What even is the most complex programming language with a fully machine-checkable spec? Are there actually practically useful ones? I know of none.



For the seL4 proofs a subset of C was formalized, for example.

(Already mentioned) CakeML would be another example, together maybe with its Pancake sibling.

Also: WebAssembly!


There's a formally verified C compiler. IIRC the frontend isn't verified, but if you define the language as the data structures that form the input to the formally verified part, then whatever C-like dialect it implements must be.


I think what the parent was getting at is open- vs. closed-loop control, not fly-by-wire or not. Both their point and yours stand, of course.


I have dedicated a large chunk of my (arguably short) professional career to improving upon this, mostly in the safety-critical software domain. What was your experience back then, what ultimately made you leave, and what do you do now?


Rust in particular, with Miri, is quite impressive at catching them. You just run your test cases via

    cargo miri test
And if your code actually touches UB, Miri will most likely point out exactly where and why.
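
For example, a deliberately broken snippet like this (hypothetical, not from any real codebase) makes Miri abort with a diagnostic pointing at the offending line:

    fn main() {
        let xs = [1u8, 2, 3];
        let p = xs.as_ptr();
        // Reads one element past the end of `xs`: undefined behavior,
        // which Miri reports as an out-of-bounds memory access.
        let v = unsafe { *p.add(3) };
        println!("{v}");
    }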


It doesn't necessarily, but it can. Genode/SculptOS is kind of a microkernel OS framework, and it can use seL4 as the kernel.

Here is a talk about that porting effort:

https://m.youtube.com/watch?v=N624i4X1UDw


Gernot Heiser would strongly disagree with you on the last one :D


I'm not so sure; I thought "patch" originated from punching holes in cards to program stuff. A software patch was literally a patch of tape that hid an erroneously punched hole in such a card.

The term patch-cable seems to be way younger.


https://www.merriam-webster.com/dictionary/patchboard

patchboard: a switchboard in which circuits are interconnected by patch cords

First known use: 1934, in the meaning defined above


I find it far more likely that patches, in the sense of fixing problems with small, targeted changes, derive from the use of the term for fixing holes in cloth by sewing on another piece of cloth.


Everything is about patching up clothes or other things. I was just commenting on the “patch-cable seems to be way younger” remark.


Hence patch cable.


https://www.pe0sat.vgnet.nl/sdr/iq-data-explained/

This is an excellent introduction to the concept, and also to why complex numbers are used to represent signal samples.


I prefer a more "physical" explanation - you have two carriers: sin(wt) and cos(wt), and you're modulating bits I and Q onto the two carriers and adding them up before transmitting. Now, mathematically, that's the same as representing the two bits as I+jQ and multiplying it with cos(wt)+jsin(wt). Demodulation is simply multiplying that output with the complex conjugate cos(wt)-jsin(wt), which in physical terms translates to mixing with a local oscillator output and low pass filtering.
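
A tiny numeric sketch of that idea, with made-up symbol values and carrier frequency, and a crude averaging stand-in for the low-pass filter:

    // Modulate I/Q onto cos/sin carriers, then recover them by mixing
    // with the same carriers and averaging over whole carrier cycles.
    fn main() {
        let (i_sym, q_sym) = (0.7, -0.3); // the symbol we want to transmit
        let w = 2.0 * std::f64::consts::PI * 10.0; // 10 carrier cycles over t in [0, 1)
        let n = 100_000;
        let (mut i_hat, mut q_hat) = (0.0, 0.0);
        for k in 0..n {
            let t = k as f64 / n as f64;
            // Transmit: the two modulated carriers, added up.
            let s = i_sym * (w * t).cos() + q_sym * (w * t).sin();
            // Receive: mix with the local oscillator; the 2w terms average out.
            i_hat += 2.0 * s * (w * t).cos() / n as f64;
            q_hat += 2.0 * s * (w * t).sin() / n as f64;
        }
        println!("recovered I ~ {i_hat:.3}, Q ~ {q_hat:.3}");
    }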


Why would you want two carriers?


Twice as much information.

My go-to for I/Q is: Having two allows you to represent negative frequencies. With a normal, real signal, this is of course impossible (negative frequencies will automatically mirror the positive ones), but if you have a signal centered around e.g. 1 MHz, there's room for above-1MHz and below-1MHz to be meaningfully different. And _that_ allows you to get a complex signal (I/Q), once you pull the center down to 0 Hz for convenience of calculation.

