Hacker News | simne's comments

I heard about one specific computer of the System/360 era on which it was dangerous to touch two keyboards simultaneously, because there was a high voltage difference between them.

When one guy asked "why not?", the consultant answered on IBM's behalf: "we don't think anybody will ever need to work with two keyboards."

In reality, two keyboards were convenient for debugging, because you could control two terminals at once, and now that is standard debugging practice.


If it is possible for this exact plane, the software update could be done as just a routine procedure.

But as I hear, air carriers can buy planes in different configurations: for example, Emirates or Lufthansa always buy planes with all features included, but a small Asian airline might buy a limited configuration (even without some safety indicators).

So Emirates or Lufthansa would need one empty flight to their home airport, but a small airline would need to fly the plane to some large maintenance base (or to the factory) and wait in a queue there (you can find images on the internet of the Boeing factory field with lots of grounded 737 MAXes a few years ago).

So for Emirates or Lufthansa the impact on flights would be minimal (just like swapping out a bus), but for small airlines things could be much worse.


> how do you avoid the voting circuit becoming a single point of failure

They don't. They just make the voting circuit much more reliable than the computing blocks.

As an example, the computing blocks could be CMOS, but the voting circuit made from discrete components, which are simply too large to be sensitive to particle strikes.

Unfortunately, discrete components are sensitive to total accumulated dose (more so than nm-scale transistors), because their larger area gathers more events and suffers from diffusion.
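
For illustration, a minimal sketch of the 2-of-3 bitwise majority vote such a circuit performs (Python here only to show the logic function; real voters are combinational hardware, and the 32-bit word width is my assumption, not something from the thread):

    def tmr_vote(a: int, b: int, c: int) -> int:
        # A bit of the output is 1 whenever at least two of the three
        # redundant copies have that bit set, so a single upset in any
        # one copy is masked.
        return (a & b) | (a & c) | (b & c)

    good = 0x1234ABCD
    flipped = good ^ (1 << 4)  # single-event upset in one copy
    assert tmr_vote(good, flipped, good) == good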

Another example from the aviation world: many planes still have a mechanical connection from the control yoke to the control surfaces, because a mechanical connection is considered perfectly reliable. Unfortunately, at least one catastrophe happened because one pilot jammed his yoke and the other pilot could not overcome the jam.

BTW, weird fact: modern planes don't have a rod physically connected to the engine, because the engine has its own computer which emulates the behavior of an old piston-engine carburetor; on Boeings the emulated throttle lever has an electric actuator, so it is automatically moved to the position corresponding to the actual engine mode, but Airbus levers don't have such an actuator.

What I want to say is that big planes especially (and planes overall) are a weird mix of very conservative inherited mechanisms and new technologies.


Electronics in high-radiation environments benefit from a large feature size with regard to SEU reduction, but you're correct that the larger parts degrade faster in such environments, so they've created "rad-hard" components to mitigate that issue.

https://en.wikipedia.org/wiki/Radiation_hardening

It's interesting to me that triple-voting wasn't as necessary on the older (rad-hard) processors. Every foundry in the world is steering toward CPUs with smaller and smaller feature sizes, because they are faster and consume less power, but the (very small) market for space-based processors wants large feature sizes. Because those aren't available anymore, TMR is the work-around.

https://en.wikipedia.org/wiki/IBM_RAD6000

https://en.wikipedia.org/wiki/RAD750

Most modern space processing systems use a combination of rad-hard CPUs and TMR.


Large-scale DBA and ops. For them a typical daily task is something like "which is more reliable: a stripe of two mirrors (RAID 10) or a mirror of two stripes (RAID 01)?" - mathematical thinking gives an exact answer.
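
As a minimal sketch of that kind of back-of-the-envelope math (assuming four identical disks that fail independently, with an illustrative per-disk failure probability of 0.05; these numbers are mine, not from the comment):

    p = 0.05  # hypothetical per-disk failure probability over some period

    # RAID 10 (stripe of two mirrors): lost only if some mirror pair
    # loses BOTH of its disks.
    raid10_fail = 1 - (1 - p * p) ** 2        # ~0.00499

    # RAID 01 (mirror of two stripes): each stripe is lost if ANY of its
    # disks fails; the array is lost if BOTH stripes are lost.
    raid01_fail = (1 - (1 - p) ** 2) ** 2     # ~0.00951

    print(raid10_fail, raid01_fail)

Under those assumptions the stripe of mirrors comes out roughly twice as reliable, which matches the usual operational preference for RAID 10.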


Well, I have seen one suggestion for the probable near future of AI: stop at the GPT-4 level, but distill the model and use optimizations like switching to FP8 for faster speed.

So basically the idea is a model with the very same capabilities, but distilled and optimized to have a smaller size and faster inference.

I cannot promise a 100x speedup, but I think 10x is very realistic.
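
As a rough illustration of where the size reduction comes from, a minimal NumPy sketch of symmetric 8-bit quantization of one weight matrix (the shape, scaling scheme, and numbers are illustrative assumptions; real FP8 inference relies on dedicated hardware formats and kernels):

    import numpy as np

    weights = np.random.randn(4096, 4096).astype(np.float32)  # pretend model layer

    # Per-tensor symmetric quantization: store int8 values plus one scale,
    # reconstruct with w ~= q * scale.
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    restored = q.astype(np.float32) * scale

    print(weights.nbytes // 2**20, "MiB fp32")      # 64 MiB
    print(q.nbytes // 2**20, "MiB int8")            # 16 MiB
    print(float(np.abs(weights - restored).max()))  # worst-case rounding error

Moving 4x fewer bytes per token is where much of the practical speedup comes from; the rest depends on kernel and hardware support for the low-precision format.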


Your opinion is really interesting and important, but it lacks evidence.

What I mean is that over the last few years I have constantly gathered information on how countries grew their hacker communities and programming industries, and at first I saw obvious things, but now, with more information, things look more complicated.

As an example, I have seen that Eastern Europe and the ex-USSR grew their hacker communities and IT mostly with cheap unlicensed local clones of PDP and IBM machines; as I see it, we in the ex-USSR know about Commodore/Atari and about consoles (I mean pre-Xbox), and we somewhat lack their taste, but we are already mature and even more or less competitive.

On the other side, Japan has a rich history of consoles and machines with extended graphics and sound capabilities (MSX, PC-98), and it has good achievements in enterprise machines and in hardware, but I don't see a Japanese Google, or a Japanese Facebook, or a Japanese Oracle (databases). And I have no answer as to how this happened.

What impresses me even more: a few years ago I learned that GDP in Asia/Africa and the accessibility of computing devices grew very fast after Android appeared (I'm not sure GDP is connected, but with Android the smartphone became a universal computing device), so I see growth of game sales to Asia/Africa. Why did I notice? Because of their very specific cultures, game developers have to make significant changes to a game to enter those markets. And this shocked me, as I realized the Japanese simply avoid entering these markets, staying focused on their internal market and on the West.

And BTW, on the other side: Eastern Europe and the ex-USSR were so poor that, having only moderate access to really good Western computers, a huge share of the economy did its accounting with pen, paper, and abacus, some entities even into the mid-2000s. As far as I know, Japan has had access to computers nearly on par with Americans since at least the mid-1980s, but it looks like they lost something when most people switched from handwriting to the keyboard.


As for what could be a good machine for hacking, I have some ideas.

First, 8-bitness is probably unavoidable, because all the current Raspberry Pis are relatively powerful computers; you can usually even install Android on them, so you understand what I want to say :)

- A machine for hackers should be limited in RAM and CPU speed, and sure, have limited screen resolution and limited sound, because otherwise at some point it becomes a race of wallets, as high-quality picture and sound are usually expensive.

On the other hand, the graphics should not be too primitive; it looks like a good compromise is the C64 or Atari-65 (not many static objects in the background, but with hardware-accelerated sprites).

For some time I thought the best balance for a hacker's machine was the C64, until I read some details about the Enterprise 128 (or 64).

What sets the E128 (and E64) apart is their absolutely unique video adapter, capable of showing several resolutions on one screen. Imagine a classic arcade game: it is very common to have a static background and some score indicators on the top part of the screen, while the whole game process runs on the lower part. So in a good design we should render the top part with the minimum possible effort and focus on the lower part, and the E128/64 is the hardware closest to this.

As for real implementations, I'm impressed with the ESP32 Rainbow, but unfortunately it is a ZX Spectrum emulator, and I think it is impossible to do a C64 or E128 on that hardware. When time allows, I'll try other cheap hardware platforms; as I hear, an RP2040 can emulate individual C64 chips (but nobody has done a whole C64 on multiple RP2040s), so it would be a multi-chip machine, but that's OK.


> If Windows didn't have to support a gigantic universe of old-yet-critical software

Interesting point, but not exactly.

MS is close to a monopoly position and is very close to the formal thresholds where regulators must impose regulatory measures.

As far as I know, nearly all companies that achieved such a huge share of the US market got a warning from regulators, and most immediately hit the brakes to limit their share and avoid measures. Examples from the past are IBM, Commodore/Atari, etc.

But what is interesting: using some obvious moves, like simply stripping down the API to limit share, is considered by regulators as an offense, so the company must not do direct things to limit its product, and can only slow down innovation.


They will not, because they need to preserve at least weak competition, or antitrust regulators will impose very heavy penalties on Nvidia.

This is the reason why Intel in all previous decades kept a tiny sliver of the market for competitors (AMD, sure, but also the likes of Cyrix or SiS), but immediately hit the brakes when some competitor became too competitive: to show regulators that the market is still competitive and not just a monopoly.

The size of the sliver left for the outsiders is not the right parameter here; what is more important is that the outsiders will not show brilliant products in the most important niches.

So the idea is that Arc will not die fast, but it will constantly lag, staying only second or third.

How could one clip a GPU's wings? Well, first, delay the top products, for example by installing slow RAM and using overly high temperature margins, so the chip runs at a lower frequency than it could.

Second, as I hear, Arc drivers are still not ideal and some games don't run smoothly.

Third, cut all long-term, forward-looking initiatives, like WebGPU.

PS: As other examples, you may have seen the strange behavior of IBM and Commodore/Atari, when they avoided implementing some very obvious things. The reason is that they were visited by regulators and warned about approaching the formal threshold, and after that visit they hit the brakes and limited their products to avoid becoming the next AT&T.


WarGames was one of the first Western movies to appear in the USSR at the time of détente (literally "discharge" in Russian; perhaps better translated as "easing"), and it was such a breath of fresh air.

I saw movies with Redford later, but unfortunately I only just now learned about Redford's connection with WarGames.

RIP, great man.


The main issue is that current smartphones are nearly 100% camera phones, and people are just used to the camera-phone world and don't want anything else.

But unfortunately, a tiny camera is the hardest part, and it is no coincidence that nearly all the giants of the smartphone industry regularly show off an outstanding camera in their presentations.

Other things besides the camera are mostly within reach of the Linux community.

