As a fan of Algol 68, I'm pretty excited for this.

For people who aren't familiar with the language, pretty much all modern languages are descended from Algol 60 or Algol 68. C descends from Algol 60, so pretty much every popular modern language derives from Algol in some way [1].

[1] https://ballingt.com/assets/prog_lang_poster.png


Yes, massively influential, but was it ever used or popular? I always think of it as sort of the poster child for the danger of "design by committee".

Sure, its ideas spawned many of today's languages, but wasn't that because at the time nobody could afford to actually implement the spec? So we ended up with a ton of "Algol, but"s (like Algol, but it can actually be implemented and runs on real hardware).


Yes, for example the UK Navy had a system developed in an Algol 68 subset.

https://academic.oup.com/comjnl/article-abstract/22/2/114/42...


Used extensively on Burroughs mainframes.


Wow. The Burroughs large systems had special instructions explicitly for efficient Algol use. You could almost say it was Algol hardware, but Algol 60, not 68.

https://en.wikipedia.org/wiki/Burroughs_Large_Systems

There is a large systems emulator that runs in a browser. I did not get any Algol written, but I did have way too much fun going through the boot sequence.

https://www.phkimpel.us/B5500/webUI/B5500Console.html


Burroughs used an Algol60 derivative (not '68)


ESPOL initially, which evolved into NEWP.


ESPOL was (is?) simply a version of the standard Algol compiler that let you do 'system' sorts of things.

The Burroughs large systems architecture didn't really protect you from yourself; system security/integrity depended on only letting code from vetted compilers run (only a compiler could make a code file, and only a privileged person could make a program a compiler). So the Algol 60 compiler made code that was safe, while ESPOL could make code that wasn't and could do things a normal user couldn't - you kept the ESPOL compiler somewhere safe, away from the students...

(there was a well known hole in this whole thing involving mag tapes)


As mentioned it evolved into NEWP, and you can get all the manuals from Unisys, as they keep selling it.

Given its architecture, it is sold for batch processing systems where security is paramount.

Yes, ESPOL and NEWP were among the first systems languages with UNSAFE code blocks; a binary compiled with unsafe code is tainted and requires administrator configuration before the system will allow it to execute.

One cannot just compile such code and execute it right away.


I would argue C comes from Algol68 (structs, unions, pointers, a full type system etc, no call by name) rather than Algol60


C had 3 major sources, B (derived from BCPL, which had been derived from CPL, which had been derived from ALGOL 60), IBM PL/I and ALGOL 68.

Structs come from PL/I, not from ALGOL 68, together with the postfix operators "." and "->". The term "pointer" also comes from PL/I, the corresponding term in ALGOL 68 was "reference". The prefix operator "*" is a mistake peculiar to C, acknowledged later by the C language designers, it should have been a postfix operator, like in Euler and Pascal.

Examples of things that come from ALGOL 68 are unions (unfortunately C unions lack most useful features of the ALGOL 68 unions, which are implicitly tagged unions) and the combined operation-assignment operators, e.g. "+=" or "*=".
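
A rough sketch of the manual tagging that C code typically adds to approximate what ALGOL 68 unions did implicitly (the type and member names here are purely illustrative):

    #include <stdio.h>

    /* C unions carry no tag; the programmer must track which member is live. */
    enum kind { K_INT, K_REAL };

    struct value {
        enum kind kind;                 /* the hand-maintained tag */
        union { int i; double r; } u;
    };

    static void print_value(const struct value *v) {
        if (v->kind == K_INT)
            printf("%d\n", v->u.i);
        else
            printf("%g\n", v->u.r);
    }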

The Bourne shell scripting language, inherited by ksh, bash, zsh etc., also has many features taken from ALGOL 68.

The explicit "malloc" and "free" also come from PL/I. ALGOL 68 is normally implemented with a garbage collector.


C originally had =+ and =- (up to and including Unix V6) - they were ambiguous (does a=-b mean a = -b or a = a-b?) and were replaced by +=/-=.
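
A tiny sketch of that ambiguity (modern C parses the first assignment as a = (-b)):

    int a = 5, b = 3;
    a =- b;    /* old C: did this mean a = a - b, or a = (-b)? */
    a -= b;    /* the unambiguous spelling that replaced =- */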

The original structs were pretty bad too - field names had their own address space and could sort of be used with any pointer, which sort of allowed you to make tacky unions. We didn't get a real type system until the late 80s.


ALGOL 68 had "=" for equality and ":=" for assignment, like ALGOL 60.

Therefore the operation with assignment operators were like "+:=".

The initial syntax of C was indeed weird; it was caused by the way the original parser in the first C compiler happened to be written and rewritten. The later form of the assignment operators was closer to their source in ALGOL 68.


Yeah, if you ever wondered why the fields in a lot of POSIX APIs have names with prefixes like tm_sec and tv_usec, it's because of this misfeature of early C.
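
For example, the standard struct tm still carries the prefix on every member (abbreviated here):

    struct tm {
        int tm_sec;    /* seconds after the minute */
        int tm_min;    /* minutes after the hour */
        int tm_hour;   /* hours since midnight */
        int tm_mday;   /* day of the month */
        /* ... every member keeps the tm_ prefix, a habit from the days when
           struct member names shared a single namespace ... */
    };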


> it should have been a postfix operator, like in Euler and Pascal.

I never liked Pascal style Pointer^. As the postfix starts to get visually cumbersome with more than one layer of Indirection^^. Especially when combined with other postfix Operators^^.AndMethods. Or even just Operator^ := Assignment.

I also think it's the natural inverse of the "address-of" prefix operator. So we have "take the address of this value" and "look through the address to retrieve the value."


The "natural inverse" relationship between "address-of" and indirect addressing is only partial.

You can apply the "*" operator as many times as you want, but applying "address-of" twice is meaningless.

Moreover, in complex expressions it is common to mix the indirection operator with array indexing and with structure member selection, and all these 3 postfix operators can appear an unlimited number of times in an expression.

Writing such addressing expressions in C is extremely cumbersome, because they require many levels of parentheses and it is still difficult to see the order in which they are applied.

With a postfix indirection operator no parentheses are needed and all addressing operators are executed in the order in which they are written.

So it is beyond reasonable doubt that a prefix "*" is a mistake.

The only reason why they have chosen "*" as prefix in C, which they later regretted, was because it seemed easier to define the expressions "*++p" and "*p++" to have the desired order of evaluation.

There is no other use case where a prefix "*" simplifies anything, and for the postfix and prefix increment and decrement it would have been possible to find other ways to avoid parentheses; even if they were used with parentheses, that would still have been simpler than having to mix "*" with array indexing and with structure member selection.

Moreover, the use of "++" and "--" with pointers was only a workaround for a dumb compiler, which could not determine by itself whether it should access an array using indices or pointers. Normally there should be no need to expose such an implementation detail in a high-level language; the compiler should choose the addressing modes that are optimal for the target CPU, not the programmer.

On some CPUs, including the Intel/AMD CPUs, accessing arrays by incrementing pointers, like in the old C programs, is usually worse than accessing the arrays through indices (because on such CPUs the loop counter can be reused as an index register, regardless of the order in which the array is accessed, including for accessing multiple arrays, avoiding the use of extra registers and reducing the number of executed instructions).
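
As a sketch of that last point (purely illustrative; what a compiler actually emits depends on the target and optimization level):

    /* Two equivalent ways to sum an array. On modern compilers the indexed
       form is at least as good, and often easier for the optimizer. */
    long sum_indexed(const int *a, int n) {
        long s = 0;
        for (int i = 0; i < n; i++)
            s += a[i];
        return s;
    }

    long sum_pointer(const int *a, int n) {
        long s = 0;
        for (const int *p = a; p != a + n; p++)   /* the old "*p++" style */
            s += *p;
        return s;
    }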

With a postfix "*", the operator "->" would have been superfluous. It has been added to C only to avoid some of the most frequent cases when a prefix "*" leads to ugly syntax.
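
A small sketch of what that means in practice (hypothetical struct, purely illustrative):

    struct node { int value; struct node *next; };

    int second_value(const struct node *n) {
        return (*(*n).next).value;   /* prefix "*" forces parentheses at every level */
        /* n->next->value            -- "->" exists only to make this readable */
    }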


> You can apply the "*" operator as many times you want, but applying "address-of" twice is meaningless.

This is due to the nature of lvalue and rvalue expressions. You can only get an object where * is meaningful twice if you've applied & meaningfully twice before.

    int a = 42;
    int *b = &a;
    int **c = &b;
I've applied & twice. I merely had to negotiate with the language instead of the parser to do so.

> and all these 3 postfix operators can appear an unlimited number of times in an expression.

In those cases the operator is immediately followed by a non-operator token. I cannot meaningfully write a[][1], or b..field.

> The only reason why they have chosen "*" as prefix in C, which they later regretted, was because it seemed easier to define the expressions "*++p" and "*p++" to have the desired order of evaluation.

It not only seems easier, it is easier. What you sacrifice is complications in defining function pointers. One is far more common than the other. I think they got it right.

> With a postfix "*", the operator "->" would have been superfluous.

Precisely the reason I dislike the Pascal**.Style. Go offers a better mechanism anyways. Just use "." and let the language work out what that means based on types.

I'm offering a subjective point of view. I don't like the way that looks or reads or mentally parses. I'm much happier to occasionally struggle with function pointers.


I do not think that is what they meant.

**c is valid but &&b makes no sense.


Some languages do define &&b, like Rust, where its effect is similar to the parent post's C example: it creates a temporary stack allocation initialized with &b, and then takes the address of that.

You could argue this is inconsistent or confusing. It is certainly useful though.

Incidentally, C99 lets you do something similar with compound literal syntax; this is a valid expression:

    &(int *){&b}
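
A complete sketch of the same idea; to make the types line up with the earlier example (where b is an int *), the literal here is int ** rather than int *:

    #include <stdio.h>

    int main(void) {
        int a = 42;
        int *b = &a;
        /* C99 compound literal: an unnamed int ** object initialized with &b,
           whose address we then take, yielding an int ***. */
        int ***c = &(int **){ &b };
        printf("%d\n", ***c);   /* prints 42 */
        return 0;
    }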


> The only reason why they have chosen "*" as prefix in C, which they later regretted, was because it seemed easier to define the expressions "*++p" and "*p++" to have the desired order of evaluation.

There has been no shortage of speculation, much of it needlessly elaborate. The reality, however, appears far simpler – the prefix pointer notation had already been present in B and its predecessor, BCPL[0]. It was not invented anew, merely borrowed – or, more accurately, inherited.

The common lore often attributes this syntactic feature to the influence of the PDP-11 ISA. That claim, whilst not entirely baseless, is at best a partial truth. The PDP-11 did support post-increment and pre-decrement indirect address manipulation – but notably lacked their symmetrical complements: pre-increment and post-decrement addressing modes[1]. In other words, it exhibited asymmetry – a gap that undermines the argument for direct PDP-11 ISA inheritance, i.e.

  MOV (Rn)+, Rm
  MOV @(Rn)+, Rm
  MOV -(Rn), Rm
  MOV @-(Rn), Rm
existed but not

  MOV +(Rn), Rm
  MOV @+(Rn), Rm
  MOV (Rn)-, Rm
  MOV @(Rn)-, Rm
[0] https://www.thinkage.ca/gcos/expl/b/manu/manu.html#Section6_...

[1] PDP-11 ISA allocates 3 bits for the addressing mode (register / Rn, indirect register (Rn), auto post-increment indirect / (Rn)+ , auto post-increment deferred / @(Rn)+, auto pre-decrement indirect / -(Rn), auto pre-decrement deferred / @-(Rn), index / idx(Rn) and index deferred / @idx(Rn) ), and whether it was actually «let's choose these eight modes» or «we also wanted pre-increment and post-decrement but ran out of bits» is a matter of historical debate.


The prefix "*" and the increment/decrement operators have been indeed introduced in the B language (in 1969, before the launch of PDP-11 in 1970, but earlier computers had some autoincrement/autodecrement facilities, though not as complete as in the B language), where "*" has been made prefix for the reason that I have already explained.

The prefix "*" WAS NOT inherited from BCPL, it was purely a B invention due to Ken Thompson.

In BCPL, "*" was actually a postfix operator that was used for array indexing. It was not the operator for indirection.

In CPL, the predecessor of BCPL, there was no indirection operator, because indirection through a pointer was implicit, based on the type of the variable. Instead of an indirection operator, there were different kinds of assignment operators, to enable the assignment of a value to the pointer, instead of assigning to the variable pointed by the pointer, which was the default meaning.

BCPL made many changes to the syntax of CPL, whose main reason was the necessity of adapting the language to the impoverished character set available on American computers, which lacked many of the characters that had been available in Europe before IBM and a few other US vendors succeeded in replacing the local vendors, thus also imposing the EBCDIC and later the ASCII character sets.

Several of the changes done between BCPL and B had the same kind of reason, i.e. they were needed to transition the language from an older character set to the then new ASCII character set. For instance the use of braces as block delimiters was prompted by their addition into ASCII, as they were not available in the previous character set.

The link that you have provided to a manual of the B language is not useful for historical discussions, as the manual is for a modernized version of B, which contains some features back-ported from C.

There is a manual of the B language dated 1972-01-07, which predates the C language, and which can be found on the Web. Even that version might have already included some changes from the original B language of 1969.


"*" was the usual infix multiplication operator in BCPL, and it was not used for pointer arithmetic.

The BCPL manual[0] explains the «monadic !» operator (section 2.11.3) as:

  2.11.3 MONADIC !

  The value of a monadic ! expression is the value of the storage cell whose address is the operand of the !. Thus @!E = !@E = E, (providing E is an expression of the class described in 2.11.2).

  Examples.

  !X := Y Stores the value of Y into the storage cell whose address is the value of X.

  P := !P Stores the value of the cell whose address is the value of P, as the new value of P.
The array indexing used the «V ! idx» syntax (section 2.13, «Vector application»).

So, the ! was a prefix operator for pointers, and it was an infix operator for array indexing.

In Richards' account of BCPL's evolution, he noted that on early hardware the exclamation mark was not easily available, and, therefore, he used a composite *( (i.e. a digraph):

  «The star in *( was chosen because it was available … and it seemed appropriate for subscription since it was used as the indirection operator in the FAP assembly language on CTSS. Later, when the exclamation mark became available, *( was replaced by !( and exclamation mark became both a dyadic and monadic indirection operator».
So, in all likelihood, !X := Y became *(X := Y, eventually becoming *X = Y (in B and C) whilst retaining the exact and original semantics of the !.

[0] https://rabbit.eng.miami.edu/info/bcpl_reference_manual.pdf


The BCPL manual you linked is not useful, as it describes a recent version of the language, which is irrelevant for the evolution of the B and C languages. A manual of BCPL from July 1967, predating B, can be found on the Web.

The use of the character "!" in BCPL is much later than the development of the B language from BCPL, in 1969.

The asterisk had 3 uses in BCPL, as the multiplication operator, as a marker for the opening bracket in array indexing, to compensate for the lack of different kinds of brackets for function evaluation and for array indexing, and as the escape character in character strings. For the last use the asterisk has been replaced by the backslash in C.

There was indeed a prefix indirection operator in BCPL, but it did not use any special character, because the available character set did not have any unused characters.

The BCPL parser was separate from the lexer, and it was possible for the end users to modify the lexer, in order to assign any locally available characters to the syntactic tokens.

So if a user had appropriate characters, they could have been assigned to indirection and address-of, but otherwise they were just written RV and LV, for right-hand-side value and left-hand-side value.

It is not known whether Ken Thompson had modified the BCPL lexer for his PDP computer, to use some special characters for operators like RV and LV.

In any case, he could not have used asterisk for indirection, because that would have conflicted with its other uses.

The use of the asterisk for indirection in B became possible only after Ken Thompson had made many other changes and simplifications in comparison with BCPL, removing any parsing conflicts.

You are right that BCPL already had prefix operators for indirection and address-of, which was different from how this had been handled in CPL, but Martin Richards did not seem to have any reason for this choice and in BCPL this was a less obvious mistake, because it did not have structures.

On the other hand, Ken Thompson did want to have "*" as prefix, after introducing his increment and decrement operators, in order to need no parentheses for pre- and post-incrementation or decrementation of pointers, in the context where postfix operators were defined as having higher precedence than prefix.

Also in his case this was not yet an obvious mistake, because he had no structures and the programs written in B at that time did not use any complex data structures that would need correspondingly complex addressing expressions.

Only years later it became apparent that this was a bad choice, while the earlier choice of N. Wirth in Euler (January 1966; the first high-level language that handled pointers explicitly, with indirection and address-of operators) had been the right one. The high-level languages that had "references" before 1966 (the term "pointer" has been introduced in IBM PL/I, in July 1966), e.g. CPL and FORTRAN IV, handled them only implicitly.

Decades later, complex data structures became common, while manually incrementing/decrementing pointers to address arrays became a way of writing inefficient programs, preventing the compiler from optimizing array accesses for the target CPU.

So the choice of Ken Thompson can be justified in its context from 1969, but in hindsight it has definitely been a very bad choice.


I take no issue with the acknowledgment of being on the losing side of a technical argument – provided evidence compels.

However, to be entirely candid, I have submitted two references and a direct quotation throughout the discourse in support of the position – each of which has been summarily dismissed with an appeal to some ostensibly «older, truer origin», presented without citation, without substantiation, and, most tellingly, without the rigour such a claim demands.

It is important to recall that during the formative years of programming language development, there were no formal standards, no governing design committees. Each compiled copy of a language – often passed around on a tape and locally altered, sometimes severely – became its own dialect, occasionally diverging to the point of incompatibility with its progenitor.

Therefore, may I ask that you provide specific and credible sources – ones that not only support your historical assertion, but also clarify the particular lineage, or flavour, of the language in question? Intellectual honesty demands no less – and rhetorical flourish is no substitute for evidence.


What you say is right, and it would have been less lazy for me to provide links to the documents that I have quoted.

On the other hand, I have provided all the information that is needed for anyone to find those documents through a Web search, in a few seconds.

I have the quoted documents, but it is not helpful to know from where they were downloaded a long time ago, because, unfortunately, the Internet URLs are not stable. So for links, I just have to search them again, like anyone else.

These documents can be found in many places.

For instance, searching "b language manual 1972" finds as the first link:

https://www.nokia.com/bell-labs/about/dennis-m-ritchie/kbman...

Searching "martin richards bcpl 1967" finds as the first link:

https://www.nokia.com/bell-labs/about/dennis-m-ritchie/bcpl....

Additional searching for CPL and BCPL language documents finds

https://archives.bodleian.ox.ac.uk/repositories/2/archival_o...

where there are a lot of early documents about the languages BCPL and CPL.

Searching for "Wirth Euler language 1966" finds the 2-part paper

https://dl.acm.org/doi/10.1145/365153.365162

https://dl.acm.org/doi/10.1145/365170.365202

There exists an earlier internal report about Euler from April 1965 at Stanford, before the publication of the language in CACM, where both indirection and address-of were prefix, like later in BCPL. However, before the publication in January 1966, indirection was changed to be a postfix operator, a choice that was retained in Wirth's later languages.

http://i.stanford.edu/pub/cstr/reports/cs/tr/65/20/CS-TR-65-...

The early IBM PL/I manuals are available at

http://bitsavers.org/pdf/ibm/360/pli/

Searching for "algol 68 reports" will find a lot of documents.

And so on, everything can be searched and found immediately.


A postfix "*" would be completely redundant since you can just use p[0]. Instead of *p++ you'd have (p++)[0] - still quite workable.


You're kidding, right? (p++)[0] returns the contents of (p) before the ++. It's hard to imagine a more confusing juxtaposition.


A dash instead of a dot would be so much more congruent with the way Latin script generally renders compounded terms. And a reference/pointer (or even pin for short) is really nothing that much different compared to any other function/operator/method.

some·object-pin-pin-pin-transform is not harder to parse nor to interpret as human than (***some_object)->transform().


C's «static» and «auto» also come from PL/I. Even though «auto» has never been used in C, it has found its place in C++.

C also had a reserved keyword, «entry», which had never been used before eventually being relinquished from its keyword status when the standardisation of C began.


C23 also has reused auto as C++, although type inference is more limited.


That is indeed correct. Kernighan in his original book on C cited Algol 68 as a major influence.


> I'm pretty excited for this

Aside from historical interest, why are you excited for it?


Personally, I think the whole C tangent was a misstep and would love to see Algol 68 turn into Algol 26 or 27. I sort of like C and C++ and many other languages which came after, but they have issues. I think Algol 68 could develop into something better than C++; it has some of the pieces already in place.

Admittedly, every language I really enjoy and get along with is one of those languages that produced little compared to the likes of C (APL, Tcl/Tk, Forth), and as a hobbyist I have no real stake in the game.


I wonder what you think is wrong with C? C is essentially a much simplified subset of ALGOL 68. So what is missing in C?


Proper strings and arrays for starters, instead of pointers for which the programmer is responsible for doing the length housekeeping.


Arrays are not pointers and if you do not let them decay to one, they do preserve the length information.


They surely behave like one as soon as they leave local scope.

Kind of hard when passing them around as function parameters, and the static trick doesn't really work in a portable way.

Let's see how far WG14 gets with cybersecurity laws, with this kind of answer being analysed by SecDevOps and infosec experts.


Then don't allow it to decay:

    #include <stdio.h>

    void arr_fn(char (*arr)[15]) {
        enum { len = sizeof *arr };   /* length recovered from the pointer-to-array type */
        printf("len of array: %d\n", len);
        printf("Got: %.*s\n", len, *arr);
    }

    void sptr_fn(char ptr[static 15]) { printf("Got: %s\n", ptr); }

    int main(void) {
        char array[15] = "Hello, World!";

        arr_fn(&array);
        sptr_fn(array);
        return 0;
    }
Using gcc (and similarly clang) removing the '15' from 'array', and allowing it to allocate it as 14 chars will result in warnings for both function calls.

One can hide that ptr to array behind a typedef to make it more readable:

    typedef char (Arr_15)[15];
    void arr_fn2(Arr_15 *arr) {
What do you mean by 'the static trick'? Is that what I have in sptr_fn()?


That is the static trick.

The issues as it stands today are:

- It is still a warning instead of an error, and we all know how many projects have endless lists of warnings

- Only GCC and clang issue such a warning; if we want to improve C, security must be imposed on all implementations

https://c.godbolt.org/z/fEKzT4WfM


OK - assuming you're referring to 'char ptr[static 15]' as the 'static trick', then yeah - other compilers do not complain.

However the other form 'char (*arr)[15]' has always been available, and is complained of in other compilers.

I believe I remember using it in DOS based C-89 compilers back in the early 90s, possibly also in K&R (via lint) in the 80s.

NB: icc, msvc, mvc complain about the misuse of the traditional version if one adjusts your godbolt example.

Yes one has to build with warnings forcing errors, which takes a bit of work to achieve if the code has previously been built without that.


There isn't really much difference between "ignoring warnings" in C and careless use of "unsafe" or "unwrap" in Rust. Once you have entered the realm of sloppiness, the programming language will not save you.

The point is to what extent the tools for safe programming are available. C certainly has gaps, but not having proper arrays is not one of them.


    int arr[4];
    foo(arr);
We can look at this code like it passes an array by reference, but how to pass `arr` by value?


You can pass it by value when putting it into a struct. You can also pass a pointer to the array instead of letting it decay.

    void foo(int (*arr)[4]);

    int arr[4]; foo(&arr);
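
A short sketch of the struct approach (names are illustrative):

    #include <stdio.h>

    struct four_ints { int v[4]; };   /* wrapping the array gives it value semantics */

    void bar(struct four_ints x) {
        x.v[0] = 99;                  /* modifies only the callee's copy */
    }

    int main(void) {
        struct four_ints a = { {1, 2, 3, 4} };
        bar(a);
        printf("%d\n", a.v[0]);       /* still prints 1 */
        return 0;
    }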


I think what C is missing is everything that people fall back on clever use of pointers and macros to implement. Not that I think C should have all those things; Zig does a decent job of showing alternatives.


Yeah, but I meant specifically from ALGOL68.


I don't think C is missing anything from Algol 68, but FLEX and slices would be nice. Algol's slices are fairly limited, but even its limited slices are better than what C offers. Algol 68 operators are amazing but I don't see them playing well with C.


Whilst I think that C has its place, my personal choice of Algol 26 or 27 would be CLU – a highly influential, yet little known and underrated Algol inspired language. CLU is also very approachable and pretty compact.


Consider exploring Ada 2022 as a capable successor to Algol. It's well supported in GCC and scales well from very small to very large projects. Some information is at https://learn.adacore.com/ and https://alire.ada.dev/


I'd like to add a complementary question to the sibling one. What are you going to add to (or remove from?) Algol 68 to get Algol 26?


That task would be beyond my skills, as I said, I am just a hobbyist. I think it would be interesting to see what would result from going back to one of those early foundational languages and developing a modern language from it. With a language like Algol we don't have the decades of evolution (baggage) which are a big part of languages like C and C++ and trickle into the languages they inspired even if they are trying to remove that baggage. So, what would we get if we went back to the start and built a modern language off of Algol? What would that look like?


Wouldn't that be some form of Pascal?


I've actually been toying with writing an Algol 68 compiler myself for a while.

While I doubt I'll do any major development in it, I'll definitely have a play with it, just to revisit old memories and remind myself of its many innovations.


If PL/I was like a C++ of the time, Algol-68 was probably comparable to a Scala of the time. A number of mind-boggling ideas (for the time), complexity, an array of kitchen sinks.


It certainly has quite a reputation, but I suspect it has more to do with dense formalism that was quite unlike everything else. The language itself is actually surprisingly nice for its time, very orthogonal and composable.


Finally.


In most word processing software you just type "--", or "--- " to get an em-dash. It's not rocket science.


> Our AC plugs are, however, the safest design on the planet.

Not if you step on them with bare feet - those things are worse than LEGO. They could punch through a horse's hoof.


In 55 years I've never managed to do that, nor has anyone else I know. Plugs normally stay in the wall socket because they have a switch - each wall socket for general use must have a switch. The switch is quite hefty and very obviously off or on, with a red stripe. You get a satisfying audible and tactile click feedback when it is switched.

Recently a person brought in a laptop that had apparently been accidentally brushed off a desk, whilst closed, and had apparently fallen on an upturned plug. The plug had managed to hit the back of the screen, left quite a dent and spider cracking on the screen. The centre of the cracking did not match the dent ...

I'll have to do some trials but even if a plug is left on the ground, will it actually lie prongs upwards? I'll have to investigate lead torsion and all sorts of effects. It's on the to-do list but not very high.


Don't leave them unplugged. The standard requires all modern sockets to have switches, so there is no reason to have the plugs lying around on the floor.


I've never had an experience in any house or office where there have been enough sockets to leave everything plugged in.


I've never had an experience in any house or office where anything has ever been unplugged other than to put it away (a kitchen appliance that doesn't need to live on a counter, or a hair dryer, for example).

Buy a fused extension cord with more plugs, you have now turned one socket into 4, 6, or 8 sockets. You can even get some that have USB built-in, so you don't use a socket up for a phone or tablet charger. They're not even very expensive.

And in an office, I'm pretty sure all equipment (computers, lights, controls for adjustable desks if you have them), are meant to remain permanently plugged in anyway in a properly installed desk setup. What is going on in your office where you're choosing what is plugged in and what isn't, constantly? And why can't your office manager spring £20 for an extension cord with multiple sockets?


I've never stepped on a plug myself, so I agree it's not a major problem.

However, some older houses in the UK have far fewer sockets than more modern properties - sometimes only one or two per room.

And sure, if you need to use a hairdryer and a hair straightener a person with an orderly lifestyle might return them both to a cupboard afterwards - but some people don't mind clutter and just leave them wherever.

When it comes to multiway extension leads - people in the UK are sometimes told it's bad to "overload" sockets but have only a vague understanding of what that means, so some people are reluctant to use them.


"When it comes to multiway extension leads - people in the UK are sometimes told it's bad to "overload" sockets but have only a vague understanding of what that means, so some people are reluctant to use them."

To be fair, most people work on the assumption that if the consumer unit doesn't complain, then it is fair game. They are relying on modern standards, which nowadays is quite reasonable. I suppose it is good that we can nowadays rely on standards.

However, I have lived in a couple of houses with fuse wire boards, one of which the previous occupants put in a nail for a circuit that kept burning out.

Good practice is to put a low rated fuse - eg 5A (red) into extension leads for most devices. A tuppence part is easy and cheap to replace but if a few devices not involved with room heating/cooling blow a 5A fuse, you need to investigate. A hair dryer, for example, should not blow a 5A fuse.


Hair dryer and straightener would both be on a counter, right? No stepping issue there. And the same for appliance switching.

The only thing I plug in at ground level that isn't semi-permanent is a vacuum. No plugs are left lying around all day.


they are also really tough to swallow


That’s a nice reminder that they should be respected. Not left lying around.


why are you stepping on them?


Sometimes you've just got to put your foot down.


Because sometimes you unplug it and leave it lying around. Unless you live like a king, sometimes there are 2 sockets and you have 5 devices to plug in at different times. European and other plugs will land on their side, so stepping on them is no problem, but UK ones will be pointy end up.


> european and other ones will be on the side

There's almost a dozen different plug/socket types used in Europe though: https://www.plugsocketmuseum.nl/Overview.html

I will say, you definitely can tread on a German "Schuko" plug (if it has a flat face) just like a UK one.


Live like a king!

Are these prices beyond your means?

https://www.argos.co.uk/search/extension-lead/


I have one too! 3 out of 6 plugs stopped working! I have 2 plugs outside of mini kitchen area and I have laptop, phone charger, camera charger, 2 ikea lamps, .......

there are no uk plugs here so I'm not complaining:)

if keeping everything plugged works for you, awesome!


You're leaving lengths of strong flexible wire lying around places where you walk and are worried you might get hurt? Uh, yeah!


I don't worry about it:) I'm not in UK


It's a patent on a physical process for measuring blood oxygen. It's not a software patent.


This has been in process for over a year. It's not a sudden thing. The press you saw was all part of a campaign to push the idea.


Wow. That's surprisingly lame.


The NT kernel dates back to 1993. Computers didn’t exceed 64 logical processors per system until around 2014. And doing it back then required a ridiculously expensive server with 8 Intel CPUs.

The technical decision Microsoft made initially worked well for over two decades. I don’t think it was lame; I believe it was a solid choice back then.


Linux had many similar restrictions in its lifetime; it just has a different compatibility philosophy that allowed it to break all the relevant ABIs. Most recently, dual-socket 192-core Ampere systems were running into a hardcoded 256-processor limit. https://www.tomshardware.com/pc-components/cpus/yes-you-can-...


Tom's Hardware is mistaken in their reporting. That's raising the limit without using CPUMASK_OFFSTACK. The kernel already supported thousands of cores with CPUMASK_OFFSTACK, and has at least since the 2.6.x days.


> Computers didn’t exceed 64 logical processors per system until around 2014.

Server systems were available with that since at least the late 90s. Server systems with >10 CPUs were already available in the mid-90s. By the early-to-mid 90s it was pretty obvious that was only going to increase and that the 64-CPU limit was going to be a problem down the line.

That said, development of NT started in 1988, and it may have been less obvious then.


"Server systems" but not server systems that Microsoft targeted. NT4 Enterprise Server (1996) only supported up to 8 sockets (some companies wrote their own HAL to exceed that limit). And 8 sockets was 8 threads with no NUMA back then, not something that would have been an issue for the purposes of this discussion.


Microsoft was absolutely wanting to target large servers at the time. They were actively trying to kill off the vendor unices in the 90s.


They successfully killed off vendor unicies in the 90s, but that was thanks to cheap x86.


That was what stuck, but supporting the big servers was also part of their multifaceted strategy. That's why the alpha, itanium, powerpc, and mips ports existed.


The Sun E10K (up to 64 physical processors) came out in 1997.

(Now, NT for Sparc never actually became a thing, but it was certainly on Microsoft's radar at one point)


SGI Origin did by 1996.

Though MS ported NT to a number of systems (mips, alpha, ppc) it wasn’t able to play in the very big leagues until later.

I agree it was a reasonable choice at the time. Few were getting mileage out of that many CPUs back then.


That was actually the DEC team from what I understand, Microsoft just hired all of their OS engineers when they collapsed


Dave Cutler left DEC in 1988 and started working on WINNT at MS, well before the collapse.


I mean, x86 didn't, but other systems had been exceeding 64 cores since the late 90s.

And x86 arguably didn't ship >64 hardware thread systems until then because NT didn't support it.


> And x86 arguably didn't ship >64 hardware thread systems until then because NT didn't support it.

If that were the case the above system wouldn't have needed 8 sockets. With NUMA systems the app needs to be scheduling group aware anyways. The difference here really appears when you have a single socket with more than 64 hardware threads, which took until ~2019 for x86.


Why would an application need to be NUMA aware on Linux? Most software I've ever written or looked at has no concept of NUMA. It works just fine.


The same reasons it would on macOS or Windows, most people just aren't writing software which needs to worry about having a single process running many hundreds of threads across 8 sockets efficiently so it's fine to not be NUMA aware. It's not that it won't run at all, a multi-socket system is still a superset of a single socket system, just it will run much more poorly than it could in such scenarios.

The only difference with Windows is a single processor group cannot contain more than 64 cores. This is why 7-Zip needed to add processor group support - even though a 96 core Threadripper presents as a single NUMA node, the software has to request assignment to 2x48 processor groups, the same as if it were 2 NUMA nodes with 48 cores each, because of the KAFFINITY limitation.
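
For a rough idea of what "processor group support" means for application code, here is a minimal sketch using the Win32 processor-group APIs available since Windows 7 (error handling omitted):

    #include <windows.h>
    #include <stdio.h>

    int main(void) {
        /* Each group holds at most 64 logical processors, because
           KAFFINITY is a 64-bit mask. */
        WORD groups = GetActiveProcessorGroupCount();
        for (WORD g = 0; g < groups; g++) {
            DWORD cpus = GetActiveProcessorCount(g);
            printf("group %u: %lu logical processors\n",
                   (unsigned)g, (unsigned long)cpus);
        }
        /* To run on more than 64 logical processors, threads must be
           explicitly assigned to groups, e.g. via SetThreadGroupAffinity(). */
        return 0;
    }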

Examples of common NUMA aware Linux applications are SAP Hana and Oracle RDBMS. On multi-socket systems it can often be helpful to run postgres and such via https://linux.die.net/man/8/numactl too, even if you're not quite at the scale where you need full NUMA awareness in the DB. You generally also want hypervisors to pass the correct NUMA topologies to guests as well. E.g. if you have a KVM guest with 80 cores assigned on a 2x64 Epyc host setup then you want to set the guest topology to something like 2x40 cores or it'll run like crap, because the guest sees it can schedule one way but reality is another.


There were single image systems with hundreds of cores in the late 90s and thousands of cores in the early 2000s.

I absolutely stand by the fact that Intel and AMD didn't pursue high core count systems until that point because they were so focused on single core perf, in part because Windows didn't support high core counts. The end of Dennard scaling forced their hand, and Microsoft's processor group hack followed.


AMD and Intel were focused on single core performance, because personal desktop computing was the bigger business until around mid to late 2000s.

Single core performance is really important for client computing.


They were absolutely interested in the server market as well.


Do you have anything to say regarding NUMA for the 90s core counts though? As I said, it's not enough that there were a lot of cores - they have to be monolithically scheduled to matter. The largest UMA design I can recall was the CS6400 in 1993, to go past that they started to introduce NUMA designs.


Windows didn't handle NUMA either until they created processor groups, and there are all sorts of reasons why you'd want to run a process (particularly on Windows, which encourages single-process, high-thread-count software architectures) that spans NUMA nodes. It's really not that big of a deal for a lot of workloads where your working set fits just fine in cache, or you take the high hardware thread count approach of just having enough contexts in flight that you can absorb the extra memory latency in exchange for higher throughput.


3.1 (1993) - KAFFINITY bitmask

5.0 (1999) - NUMA scheduling

6.1 (2009) - Processor Groups to have the KAFFINITY limit be per NUMA node

Xeon E7-8800 (2011) - An x86 system exceeding 64 total cores is possible (10x8 -> requires Processor Groups)

Epyc 9004 (2022) - KAFFINITY has created an artificial limit for x86 where you need to split groups more granular than NUMA

If x86 had actually hit a KAFFINITY wall then something like the E7-8800 would have appeared years before processor groups, because >8 core CPUs are desirable regardless of whether you can stick 8 in a single box.

The story is really a bit the reverse of the claim: NT in the 90s supported architectures which could scale past the KAFFINITY limit. NT in the late 2000s supported scaling x86, but it wouldn't have mattered until the 2010s. Ultimately KAFFINITY wasn't an annoyance until the 2020s.


> other systems had been exceeding 64 cores since the late 90s.

Windows didn’t run on these other systems, why would Microsoft care about them?

> x86 arguably didn't ship >64 hardware thread systems until then because NT didn't support it

For publicly accessible web servers, Linux overtook Windows around 2005. Then in 2006 Amazon launched EC2, and the industry started that massive transition to the clouds. Linux is better suited for clouds, due to OS licensing and other reasons.


> Windows didn’t run on these other systems, why would Microsoft care about them?

Because it was clear that high core count, single system image platforms were a viable server architecture, and NT was vying for the entire server space, intending to kill off the vendor Unices.

> For publicly accessible web servers, Linux overtook Windows around 2005. Then in 2006 Amazon launched EC2, and the industry started that massive transition to the clouds. Linux is better suited for clouds, due to OS licensing and other reasons.

Linux wasn't the only OS. Solaris and AIX were NT's competitors too back then, and supported higher core counts.


Windows NT was originally intended to be multi-platform.


NT was and continues to be multi-platform.

That doesn't mean every platform was or would have been profitable. x86 became 'good enough' to run your mail or web server, it doomed other architectures (and commonly OSes) as the cost of x86 was vastly lower than the Alphas, PowerPCs, and so on.


Cockies are the pranksters of the bird world. They're smart and they think it's hilarious to mess with each other and anyone else. They also tear everything to pieces. So it's no surprise really that if any bird worked out how to operate a drinking fountain it'd be these hilarious little jerks.


I was visiting a place that takes in rescue animals, in this case they had a lot of birds.

In their typical speech to people about NOT keeping birds as pets they described some of the birds as "highly curious, the maturity of a human 5 year old, with an intense desire to be destructive".


My wife always jokes about how parrots sound like a fun pet until you consider the phrase "Flying eternal toddlers, that cannot be diapered or potty-trained, with can-opener mouths."


On top of that, they have one tool, and it's a pair of boltcutters you can't take away. And the most clever of them have a good chance to outlive their owners.


30 to 80 years in captivity! Factors: species and level of care.

20 to 40 in the wild.

Good sense of humour though.


There's one at a wildlife sanctuary in Tasmania reported to be 110 or so ("Fred", Bonorong Wildlife Sanctuary). Original owner is long dead, obviously.


And the means to achieve that destruction. Cockatoos are like flying bolt cutters.


I aspire to one day befriend a local murder of crows. Not to keep as pets or to make dependent on me, but maybe to bribe to clean up trash or steal quarters for me... or to defend my honor should the need arise.


> hilarious little jerks.

We had a galah chewing our hosepipe the other day. I pointed and said "oi!" and the little scamp stopped, straightened up, looked me right in the eye and ... did it again.

Oh and not to forget the kookas. I heard a pop and noise like water a few weeks ago, and ran into our living room. Outside the main window there's that hose reel mounted on the wall that was spraying freely against the glass. A kookaburra had somehow pulled the hozelock end off and was taking a shower.


The kookaburras here have a reputation for taking snags right off a burning BBQ without apparently hurting themselves.


I will never forget watching a kookaburra swoop down as my grandmother went to take a bite out of a bacon sandwich, and stealing a piece of bacon out of it without touching her or the bread. It then sat on a branch whacking the bacon against it to "kill it" before eating it.


Same with me, but I was camping as a kid. One took the snag out of my mate's bread just as he was about to bite it. It made sure it was dead by hitting it on the tree it landed in.


It seems a standard childhood memory! I had a chicken and salad sandwich downgraded to a salad sandwich while I held it in my hands as a child. A couple of decades later, almost the identical thing happened to my own kid.


I’m only just across the ditch and needed to search this.

Snag = sausage.


They stole my bacon the same way, serves me right for not sharing I guess


If they are the pranksters, I wonder what that makes the Kea. I think they are counted as smarter, they definitely enjoy a bit of malicious fun.


The most accurate representation of "Chaotic Neutral" - the cheeky bastards love stealing ANYTHING, and when there's nothing to steal they'll start ripping the rubber off your car door seals (or windshield wipers).

They are amazing birds, very deserving of the name "Clown of the Mountains".


I'll never understand why we New Zealanders chose a flightless defenseless bird as our national bird when we have so many other great candidates.


I am now obliged to mention bird of the year pūteketeke!


Thank you, John Oliver.


Kiwis are very unique and distinctive looking, so it makes sense to an outsider like me. Keas are cool but they kind of just look like parrots.


Not so defenseless


I liked the Kea messing with traffic cones and redirecting traffic, apparently slowing cars and getting fed.

https://www.youtube.com/watch?v=yZ4Y7svFgnQ


Weka can be a lot of fun too, I saw a pack of them opening someone's backpack zipper to find out what's inside.


I was hiking and had a Kea flapping its wings on the ground to get our attention while his friend was going through our backpacks.


Ah, team work.


I saw a seagull sneak up to and scream at a guy to make him drop his fish and chips and all his seagull buddies swooped in and took it.


Seagulls, magpies and ibis (I'm not being funny or joking here) have evolved to exhibit cooperative traits and behaviours to get food, including tricking, diverting, cooperating and, most annoyingly, literally staunching people.

I was having a burrito on Manly Wharf a long while back, and a seagull just landed on the table and death-stared me... I felt uncomfortable and moved, because I know they will try and take my food off me!


I haven't ever seen Brisbane's beloved bin chickens (ibis) cooperating, but they're pretty good at getting into any bin to scavenge food.

Cockatoos are worse and will flip the lid of a wheelie bin if in the mood. Crows will as well if you overfill and the lid is not shut properly.


I saw an ibis and a magpie work on opening a McDonald's bin, take out the black rubbish bag, tear it, splay its contents and fish for paper McDonald's bags!


I looked up the bird..

They are smart!

https://www.youtube.com/watch?v=7W7hEUGtv4U


Local legend has it Kea work in groups. Teamwork.

One group will entertain the tourists (in mountain huts in the back country) by putting on amusing displays of acrobatics and hijinks.

The other team use razor-sharp claws and beaks to open their packs and get to all the interesting stuff inside.


Keas are gremlins but real.


When I lived in Australia we had a wooden full length porch (elevated), and where we lived in the hills outside Melbourne we could easily have 20-30 cockatoos hang out on it in the morning. They were mercifully not loud, but they absolutely destroyed the deck rails, and we had to replace them with heavier duty industrial plastic deck.


Caiques and Blue Hyacinths are definitely more pranksters, Cockatoos are just plain psychos.


Or gangsters. We had a bird feeder, which we occasionally let run dry. A cockatoo got pissed with this, and concocted a scheme. When the feeder was empty he sat on the outside fridge and screeched. Once he got your attention, he made sure he was in full view and started destroying things. He only stopped when you put out more feed.

Amused by this I mentioned it at a neighborhood BBQ, and was greeted by a chorus of "oh yes, that happens at my place too". The guy holding the BBQ held up his BBQ tools and said: "See, brand new, this is the 3rd set". It was a neighborhood wide protection racket run by one bird.


Amazing. Cockatoos really are gangsters.


Indeed. My father spent a lot of time bellowing at cockatoos that’d land in his fruit trees and tear them to pieces. He’d storm about and wave a broom at them until they took off. Classic old man yelling at clouds.

When he was on the other side of the house in the garage, they’d take fruit from the trees and drop them on the sloping driveway so they rolled down into the garage. Come play old fella.


The Bellmac-32 was pretty amazing for its time - yet I note that the article fails to mention the immense debt that it owes to the VAX-11/780 architecture, which preceded it by three years.

The VAX was a 32-bit CPU with a two stage pipeline which introduced modern demand paged virtual memory. It was also the dominant platform for C and Unix by the time the Bellmac-32 was released.

The Bellmac-32 was a 32-bit CPU with a two stage pipeline and demand paged virtual memory very like the VAX's, which ran C and Unix. It's no mystery where it was getting a lot of its inspiration. I think the article makes it sound like these features were more original than they were.

Where the Bellmac-32 was impressive is in their success in implementing the latest features in CMOS, when the VAX was languishing in the supermini world of discrete logic. Ultimately the Bellmac-32 was a step in the right direction, and the VAX line ended up adopting LSI too slowly and became obsolete.


You might want to be more specific by what you mean by "modern", because there were certainly machines with demand-paged virtual memory before the VAX. It was introduced on the Manchester Atlas in 1962; manufacturers that shipped the feature included IBM (on the 360/67 and all but the earliest machines in the 370 line), Honeywell (6180), and, well... DEC (later PDP-10 models, preceding the VAX).


My impression of the VAX is, regardless of whether it was absolutely first at anything, it was early to have 32-bit addresses, 32-bit registers and virtual memory as we know it. You could say machines like 68k, the 80386, SPARC, ARM and such all derived from it.

There were just a lot of them. My high school had a VAX-11/730 which was a small machine you don't hear much about today. It replaced the PDP-8 that my high school had when I was in elementary school and visiting to use that machine. Using the VAX was a lot like using a Unix machine although the OS was VMS.

In southern NH in the late 1970s through mid 1980s I saw tons of DEC minicomputers, not least because Digital was based in Massachusetts next door and was selling lots to the education market. I probably saw 10 DECs for every IBM, Prime or other mini or micro.


In all those respects, the VAX was just following on to the IBM 360/67 and its S/370 successors -- they all had a register file of 32-bit general purpose registers which could be used to index byte-addressed virtual memory. It wasn't exactly an IBM knockoff -- there were a bunch of those, too (e.g., Amdahl's) -- but the influence is extremely clear.


Period might be the best word. Contemporary is also a contender I thought of first, before disqualifying it for implying 'modern'.


Also Prime as well in the 70s pre-VAX.


The article says the Bellmac-32 was single-cycle CISC. The VAX was very CISC and very definitely not single cycle.

It would have been good to know more about why the chip failed. There's a mention of NCR, who had their own NCR/32 chips, which leaned more to emulations of the System/370. So perhaps it was orders from management and not so much a technical failure.


I don't think it was single-cycle, someone mentions a STRCPY instruction that would be quite hard to do single-cycle....


Single-cycle doesn't mean that everything is single cycle, but that the simple basic instructions are. As a rule of thumb, if you can add two registers together in a single cycle, it's a single-cycle architecture.


> introduced modern demand paged virtual memory

Didn't Multics, Project Genie, and TENEX have demand paging long before the VAX?


I should have said "supermini". While mainframes had tried a variety of virtual memory schemes, the VAX was the first supermini to adopt the style of demand paged flat address space virtual memory which pretty much set the style for all CPUs since then. A lot of VAX features, like the protection rings etc., were copied to the 80386 and its successors.


There was also the Nord-5, which beat the VAX by another couple of years as a 32-bit minicomputer.


Yeah, 1972 - "Nord-5 was Norsk Data's first 32-bit machine and was claimed to be the first 32-bit minicomputer". The Wikipedia record: https://en.wikipedia.org/wiki/Nord-5


Also Interdata with the 7/32 and 8/32.


Surely he should receive a prison sentence for that alone?


With the referral to the US attorney’s office, he actually might.

Consider how painful that is going to be for Apple, and Roman, with how the current administration is abusing the DOJ.

The repercussions of this could be huge.


Well in this case it wouldn’t be abuse. I hope they do convict him if he’s guilty of perjury. It will set an example for the other weasels. “Percentage of executives of Fortune 500 companies who do time for real crimes they committed” should be a big KPI for the DOJ in my book.


I really hope you're right, but I think we're more likely to see the case swept under the rug in exchange for totally voluntary donations from Apple to various organisations with 'Trump' in their name.


That's one vision, but it's probably not the most likely one. People like privately owning cars, and as long as they're more convenient than hiring taxis it'll probably stay that way.

Here's another vision of the future - gradually everyone's cars become self-driving, and now cars are more accessible to a wider range of people. 30% of the population currently can't drive due to age or disability, but if cars drive themselves the elderly, disabled, and even children can now own and operate vehicles. And now you have 30% more cars on an already congested road system. That should be enough to make traffic jams the norm everywhere.

But in case that wasn't bad enough, consider this - now people can do other things while they travel, because they don't have to be driving. So, in turn, they can live further and further away from their workplaces in cheaper, larger houses and do more of their work on the go. And while they do this they're spending more time on the roads, and - you guessed it - causing more congestion.

And because parking will always be expensive and hard to find in busy city centers, people will set their cars to loiter while they visit, rather than parking. Just going round and round while their owners shop. Causing - you guessed it - even more congestion.

TL;DR - the most likely result of autonomous vehicles is out of control congestion.


When teleportation becomes a thing society will force supercommuters to teleport in from farther and farther-out to maximize shareholder value while remaining in compliance with their respective companies' hybrid work policies. That you arguably die and are recreated every time you pass through the portal will finally end all discussions around whether your life is worth more than productivity.


Yeah, I'd expect many cities passing laws to forbid empty driverless cars on the road unless they're a taxi.

