Hacker News | peter_d_sherman's comments

(Comedy writing mode ON: )

"We need to flatten the curve..."

(Comedy writing mode OFF: )

You know, to re-quote the powers-that-be and the mainstream news media...

What, no takers?

You know, "flatten the curve... of population increase?" -- what, still not funny?

Hey, I'm just re-quoting what other people said... (a whole lot of people, incidentally!) but in the context of the article, above!

What, still no takers?

You people have no sense of (dark, very dark, let's be completely honest about that!) humor!

:-)


>"The abstraction tower

Here’s the part that makes me laugh, darkly.

I saw someone on LinkedIn recently — early twenties, a few years into their career — lamenting that with AI they “didn’t really know what was going on anymore.” And I thought: mate, you were already so far up the abstraction chain you didn’t even realise you were teetering on top of a wobbly Jenga tower.

They’re writing TypeScript that compiles to JavaScript that runs in a V8 engine written in C++ that’s making system calls to an OS kernel that’s scheduling threads across cores they’ve never thought about, hitting RAM through a memory controller with caching layers they couldn’t diagram, all while npm pulls in 400 packages they’ve never read a line of.

But sure. AI is the moment they lost track of what’s happening.

The abstraction ship sailed decades ago. We just didn’t notice because each layer arrived gradually enough that we could pretend we still understood the whole stack.

AI is just the layer that made the pretence impossible to maintain."

Absolutely brilliant writing!

Heck -- absolutely brilliant communicating! (Which is really what great writing is all about!)

You definitely get it!

Some other people here on HN do too, yours truly included in that bunch...

Anyway, stellar writing!

Related:

https://www.joelonsoftware.com/2002/11/11/the-law-of-leaky-a...

https://en.wikipedia.org/wiki/Tower_of_Babel

https://en.wikipedia.org/wiki/Abstraction_(computer_science)

https://en.wikipedia.org/wiki/Abstraction

https://ecommons.cornell.edu/entities/publication/3e2850f6-c...


>"But NixOS isn't the only declarative distro out there. In fact GNU forked Nix fairly early and made their own spin called Guix, whose big innovation is that, instead of using the unwieldy Nix-language, it uses Scheme. Specifically Guile Scheme..."

I'd be curious if a list exists of all declarative Linux distros out there, along with the configuration language (Nix, Scheme, etc.)

I'd also be curious as to how easy it would be to convert Scheme to the Nix language, or vice-versa. In other words, it seems to me that there might be a "parent language" (for lack of a better term) out there for all lisplike and functional programming languages (a subset of Haskell, F#, or some other functional programming language, perhaps) that might act as an intermediary conversion step (again, for lack of a better term!) between one functional or lisplike programming language and another...

Probably unrelated (but maybe somewhat related!) -- consider Pandoc... Pandoc is a Haskell program that basically uses a document tree structure to convert between one type of document format and another. In terms of programming languages, you'd call that an AST, an Abstract Syntax Tree. So maybe there's some kind of simplified AST (or something like that) out there that works as the base tree for all functional and lisplike programming languages. (Yes, Lisp and its relatives sort of preserve their own tree -- their own AST -- via their intrinsic data structure, and that would seem to be true of functional programming languages too.) So what is the base tree/AST of all of these, that every language in this family can "map on to" (for lack of better terminology), that could be used (with AI / LLMs) as an "Intermediary Language" or "Intermediary Data Structure" (choose your terminology) to allow easily converting between one and the other?

Anyway, if we had that or something like that, then Nix configurations could (in theory) be easily converted to Guix, and vice-versa, automatically, as could any other Linux configured by a functional and/or lisplike language...
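
As a toy illustration of that shared-tree idea (everything below is invented for the example -- it is not a real Nix <-> Guix converter): the same tiny configuration tree can be rendered as a Nix-style attribute set or as a Scheme-style s-expression, which is exactly the kind of common intermediary such a conversion would pivot on.

```python
# Toy sketch: one intermediate tree, two surface syntaxes.
# Illustrates the "shared AST" idea only -- the node format here is
# invented for the example, not taken from Nix or Guix internals.

config = {"services": {"sshd": {"enable": True, "port": 22}}}

def to_nix(node, indent=0):
    """Render the tree as a Nix-style attribute set."""
    pad = "  " * indent
    if isinstance(node, dict):
        body = "\n".join(f"{pad}  {k} = {to_nix(v, indent + 1)};"
                         for k, v in node.items())
        return "{\n" + body + "\n" + pad + "}"
    if isinstance(node, bool):
        return "true" if node else "false"
    return repr(node)

def to_scheme(node):
    """Render the same tree as a Scheme-style s-expression."""
    if isinstance(node, dict):
        inner = " ".join(f"({k} {to_scheme(v)})" for k, v in node.items())
        return f"({inner})"
    if isinstance(node, bool):
        return "#t" if node else "#f"
    return repr(node)

print(to_nix(config))
print(to_scheme(config))
```

The real difficulty, of course, isn't the syntax (which is the easy 10%) but the semantics each distro attaches to the tree -- but a common tree is where any such converter would have to start.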

That, and I found the article very interesting!

I may have to try Guix in the future!


I was thinking the same thing. Since Scheme is in the Lisp family, it should be straightforward to modernize it to something like Clojure, which is similar to Haskell as you mentioned. Being functional, but coming from the Java/Lisp ecosystem, it might be more viable in the typical modern software environment.


Wouldn't that just make it harder to bootstrap an OS, needing to start with JVM and all...


Not necessarily harder, just add 'jdk25' to home packages. If you really don't want to use the JVM, you can use Babashka to start Clojure and use it like you would Bash.


Well, it makes it much harder to build the system from a simple assembler.

Guix is AFAIK the only distro with a well-paved bootstrap path from a simple assembler to a fully-working distro[0]. Adding the JVM or even GraalVM (which is what Babashka is based on) makes the bootstrapping that much harder.

[0]: https://guix.gnu.org/en/blog/2023/the-full-source-bootstrap-...


Is there a need to "modernise" it?


With Lisp you already have an AST.


>I’m six months behind on rent, just managed to buy time after my first eviction notice, and I’m trying to get back into building without constantly worrying about when the next one shows up.

Why not create a broker website between people who are getting eviction notices and Lawyers who specifically help people who are getting eviction notices?

That is, use what you are...

Or rather (phrased another way),

use the set of circumstances you are in, to turn around the set of circumstances that you're in.

It may sound meta, but if you individually are having a problem -- then so are a ton of other people as well!

If solving that problem has value to you, then such a solution is worth money.

If many people are also having that problem, then solutions to that problem are also worth money to them!

Sure, a legal solution, for example, finding an appropriate Lawyer to extend the point in time before eviction for say, 1-2 months (or whatever can be done) may be a suboptimal partial solution (in an ideal world, you'd like your rent to be free forever, as would everyone else), but the thing is,

even suboptimal partial solutions are worth money, if only a little bit of money...

Phrased another way,

there's money to be made by acting as a broker between people with eviction problems and the subset of Lawyers who specialize in that field, who could potentially ameliorate that problem if only a tiny amount, if only a little bit...

There's also money to be made in books and online reports... "What to do if you get an eviction notice".

While such a book or report might not be worth the price of rent (obviously, if someone had the rent money, they'd pay it -- problem solved), that information may be worth $19.95, or something in that ballpark...

Scaled across thousands of people with the same problem -- and we're looking at some decent money!

It probably won't make you rich or anything... but (and this is going to sound "evil" -- but it is not intentionally so!) it might make you enough money to pay your rent! :-)

use what you are

use the set of circumstances you are in, to turn around the set of circumstances that you're in.

Education, Knowledge, and Experience are everything...

Money, if it exists, if it exists at all, exists relative to, as an effect of these things...

Do you know everything possible about every single Landlord/Tenant Law and every single possible way to resolve an eviction notice?

If not... then I'd suggest that you are in one hell of an opportunity to learn everything you can about the matter!

Expertise (in any subject matter) translates to being sought, to being paid (sometimes very highly!) by others in return for advice, in return for knowledge, in return for communicated experience...

Money will naturally follow you with this learning once you have it, and once you monetize it -- although this will probably not happen today or tomorrow -- but it will in the future, if you can see the opportunity and capture it!

use what you are

use the set of circumstances you are in, to turn around the set of circumstances that you're in

Wishing you well in this... experience!

(You know, for lack of a better term! :-) )


>This started as a personal project because I wanted a clean, searchable dataset of startups across regions without jumping between multiple sources or dealing with noise I didn't want :).

I love this idea!

Something like that really needs to be done, and you've stepped up to the plate to begin that journey of putting all of that together!

A list of ALL startups, in one place, would really be great!

One question/caveat though -- how do you determine / how would you determine if a startup is no longer in startup mode?

That is, if the startup has become a big business, if the startup has been acquired, if the startup has failed, etc., etc.?

I guess (if the correct data wasn't present, or was unavailable or hard to parse, etc.) you could simply take startups off the list after a fixed time period, like maybe 12 months, 24 months, <?> months, ?.

Or, maybe add a retrieval date and source...

Two extra fields for your database... the date when it was spidered/sourced/parsed/found/uploaded/etc., and the source URL (or URLs...).

Then you could keep all of the data for all time... just let your users sort/filter on that retrieval date, for "freshness" of data, relative to their needs...
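
A minimal sketch of what those two extra fields might look like (the table and column names here are hypothetical examples, not the site's actual schema):

```python
import sqlite3
from datetime import date, timedelta

# Hypothetical schema: the two extra provenance fields are retrieved_at
# and source_url, alongside whatever startup data already exists.
db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE startups (
        name         TEXT,
        status       TEXT,   -- e.g. Active, Public, Acquired, Shut Down
        retrieved_at TEXT,   -- ISO date the record was spidered/sourced
        source_url   TEXT    -- where the record came from
    )
""")
db.executemany(
    "INSERT INTO startups VALUES (?, ?, ?, ?)",
    [
        ("FreshCo", "Active", "2026-01-10", "https://example.com/a"),
        ("StaleCo", "Active", "2019-05-02", "https://example.com/b"),
    ],
)

# Users can then filter on "freshness" relative to their own needs,
# e.g. only records retrieved within the last 12 months:
cutoff = (date(2026, 2, 1) - timedelta(days=365)).isoformat()
fresh = db.execute(
    "SELECT name FROM startups WHERE retrieved_at >= ?", (cutoff,)
).fetchall()
print(fresh)
```

Since nothing is ever deleted, the full history stays queryable forever; "freshness" just becomes one more filter.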

Anyway, looks great so far!

Great work!


Hey Peter, Thanks! I really appreciate the thoughtful feedback and your time.

> how do you determine / how would you determine if a startup is no longer in startup mode?

It is a challenge, as startups transition all the time in different ways -- funding rounds, IPOs, or the dreaded deadpool -- and I'm trying to figure out the best way to represent it. At this time, I'm using a combination of manual vetting and soliciting public feedback through an "edit this profile" button, and showcasing the latest state. Rather than deleting entities that are no longer startups, I tag them with statuses like Public, Acquired, Shut Down, etc., and surface that on the profile page. Here is an example: https://startups.in/united-states/airbnb (you can find a badge under the logo, and if you scroll down you can see a card that shows the exit details).

As can be seen, Airbnb is marked as a "Public Company" with IPO metadata (ticker, exit date, exit value), and still remains in the database as part of the ecosystem rather than disappearing. The current idea is to treat this more like a longitudinal startup graph.

Long-term, I'd like this to behave more like a "historical record" of startups over time (dare I say a Wikipedia for startups, but presented differently?), not just a snapshot of "current startups". That way acquisitions, failures, and IPOs become first-class signals instead of reasons to delete data. Thanks again.


>Hey Peter, Thanks! I really appreciate the thoughtful feedback and your time.

I stand by what I said -- it is a really good idea!

>It is a challenge, as startups transition all the time in different ways -- funding rounds, IPOs, or the dreaded deadpool -- and I'm trying to figure out the best way to represent it. At this time, I'm using a combination of manual vetting

I think it's noble that you'd take this task upon yourself manually, but it may turn out to be too time-consuming and unsustainable into the future -- you might wish to consider outsourcing it, automating it with AI, and/or delegating it to trusted users who are entrusted to perform those updates... or, of course, you could just continue to do it yourself...

>and soliciting public feedback through an "edit this profile" button, and showcasing the latest state. Rather than deleting entities that are no longer startups, I tag them with statuses like Public, Acquired, Shut Down, etc., and surface that on the profile page.

I think that's a good idea! More information, more transparency, more historical auditability, more information in general!

>Long-term, I'd like this to behave more like a "historical record" of startups over time (dare I say a Wikipedia for startups, but presented differently?), not just a snapshot of "current startups". That way acquisitions, failures, and IPOs become first-class signals instead of reasons to delete data.

It sounds like you are well on your way! I fully support that! "Wikipedia for Startups" sounds great, and if I needed to give a VC an elevator pitch of what you do in 60 seconds -- or heck, 10 seconds or less -- I'd phrase it exactly that way: "Wikipedia for Startups" (sounds great and communicates quickly!), or "Wikipedia for Startups, but presented differently" (as you said!), or maybe "Wikipedia for Startups, but presented with our own custom enhancements!" (sounds even better, and would make the party on the other end even more curious about it!).

But yes, looks great in general, I saw you're taking job listings (great idea, will help you monetize for the long haul!), and I think you're on a great track! (and of course, the world does very much need a "Wikipedia for Startups", however it is presented! :-) )

(Also, don't forget that Joel Spolsky made million$ when one of his interns added job search to his blog and Stack Overflow -- so the job search is a great way of monetizing and sustaining your vision, long term!)

So, wishing you a lot of luck!

It's brilliant, brilliant, brilliant, I say!


>I am working on a high-performance game that runs over ssh.

By 'ssh', you mean 'ssh' (library/program + protocol + encryption + decryption) on top of TCP/IP, on top of the Internet, right?

OK, I'm not against it... but you do understand that there are all kinds of ways for that to slow things down, right?

Your issues may (or may not!) include such things as:

o Nagle's algorithm, AKA buffering, AKA packets not being sent until N bytes (where N > 1) are ready to send, as other posters have suggested;

o Slower encryption/decryption on older hardware (if users with older hardware are a target market, and if the added loss in speed makes an impact on gameplay -- depending on the game, this may or may not be the case...);

o The fact that TCP/IP (as opposed to UDP / datagrams / "raw" sockets) imposes a connection-oriented abstraction -- requiring extra round trips for connection setup, ACKs ("I got the packet"), and retransmissions ("I didn't get the packet") -- on top of the connectionless architecture that is the Internet (https://www.joelonsoftware.com/2002/11/11/the-law-of-leaky-a...), which adds additional latency. So, for example, if a rural user in Australia experiences a 350ms delay for a raw packet to get to a U.S. server (or vice versa), then TCP/IP's handshake and any retransmissions might push the effective delay to 700ms or more, depending on the quality of the connection!

o The speed of the game being limited by both the bandwidth and latency of the slowest user (if it's a multi-player game, and if the game must not update until that user "moves"... again, game architecture will determine this, and it wouldn't be applicable to all games...)
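
Of these, Nagle's algorithm is the cheapest to rule out, since it's a one-line socket option. A sketch in Python (the same TCP_NODELAY option exists in C, Go, etc. -- whether the layer you actually control lets you set it on the underlying socket is a separate question):

```python
import socket

# Nagle's algorithm coalesces small writes into larger segments, trading
# latency for fewer packets. For an interactive game you usually want
# TCP_NODELAY set, so each small update is sent immediately.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

# getsockopt confirms the option took effect (nonzero means "Nagle off").
print(sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY) != 0)
```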

Now, you could use UDP, as other posters have suggested, but then you must manually manage connections and encryption...

That may be the right choice for some types of programmers and some types of games/applications -- but, equal-and-oppositely, it may be the wrong choice for others...

Anyway, wishing you well with your game development!

I haven't used SSH much in a debug capacity, so I'm not sure of all the SSH debugging options that exist (ssh's -v / -vv / -vvv flags do give verbose protocol-level logging) -- but it would be nice if SSH had a full packet-level debug mode, which would explain exactly WHY it chose to send any given packet that it did, along with related helpful information such as latency/timing/other metrics, etc., if it doesn't have this feature already...


>"This program generates images from text prompts [...] using the [data from] FLUX.2-klein-4B [...] and is implemented entirely in C, with zero external dependencies beyond the C standard library."

You had me at image generation!

Pure C with zero external dependencies -- is just an extra added bonus...

No, actually, pure C with zero external dependencies -- is quite awesome in its own right!

Well done!


I absolutely love it!

(Now, I would have preferred a Lattice ICE40 FPGA as opposed to the Xilinx Spartan II XC2S100 FPGA, simply because the ICE40 toolchain is entirely open source (https://prjicestorm.readthedocs.io/en/latest/) but that's a very minor (less than 1%) extremely small "nitpick" -- on what should be praised and lauded as some truly great work!)

Anyway, to repeat, I absolutely love it!

Upvoted and favorited!

Well done!


>I absolutely love it!

We are in sync! I also fell in love with this project after seeing it on Hackaday. At first I was just impressed, but the more I dug in (PCB, VHDL) the more I couldn't stop obsessing over it :) It's super well documented, well structured, and easy to follow. A true "hello world" of building a 386/486 chipset. My HaD comment from 3 weeks ago:

The HaD blog entry doesn't do justice to this AMAZING project. The author implemented:

    Intel 386/486 CPU bus handling
    ISA bus handling
    reused vintage 486 CPU
    reused vintage 8254 PIT (timer)
    reused vintage 8259 PIC (interrupts)
maniek86 built a legit vintage PC motherboard the way companies did back in the mid-eighties, designing their own chipsets, all on his own, in a span of a few months. The only missing component is an old-school DRAM memory controller; skipping it is a no-brainer, as driving DRAMs is almost an art form (as much analog as digital), and learning how to create one could take another year, with most time spent chasing quirks and compatibility woes.

Want to hear something wild -- this was maniek86's first 4-layer board ever :o Talk about jumping into deep water.

From reading maniek86's blog, it all started when he got scammed buying a Chinese no-name ISA/PCI POST code analyzer card that didn't really support the ISA side https://maniek86.xyz/pl/blog.php?p=31 :

"It turned out that ISA part of the card was a scam – it could only measure voltages and show CLK, RDY, and reset signals. I was disappointed. I had to repair the motherboard without the help of POST codes. Eventually, I managed to fix it, but the card didn’t meet my expectations. That’s when I came up with the idea of building my own card instead of buying another one."

And so he did, just like Bender with blackjack and all! The end result is https://maniek86.xyz/projects.php?p=41 https://github.com/maniekx86/isa_debug_post_card https://github.com/maniekx86/isa_debug_post_card_cpld_source deserving its own HaD entry. To make the POST code card, maniek86 had to:

- learn how the ISA bus works

- learn VHDL

- do digital archeology to dig up a 17-year-old Xilinx ISE that could still support the obsolete XC95144XL 5-volt CPLD

- learn about output buffers the hard way, by frying the first XC95144XL driving LEDs directly -- didn't we all? :)

This POST code analyzer card led directly to the creation of the M8SBC. What a hacking tour de force. I absolutely love it.


I'm guessing the Spartan II was used because it is compatible with 5V IO


The Spartan 2 was used because it was free; the author salvaged it, together with an ATmega128, from some scrap he had lying around :)

Here is a prototype https://imgur.com/gallery/486-homebrew-computer-lsUiWdw

The most impressive part of this build is that maniek86 (Piotr Grzesik) is still in High School (electronics oriented CTE).


Random Idea: A Completely Open-Source Banking App...

Consider an open-source web browser (Chromium, Firefox, ?, ???, or any open-source browser from: https://github.com/nerdyslacker/desktop-web-browsers).

OK.

We know the following:

A) That most Banks have web pages / websites which can be accessed via one or more of the above web browsers (AKA "Online Banking"), where the provided functionality is exactly the same, or very close to the functionality provided by stand-alone banking Apps

B) That the source code for any open-source web browser is available, and can be downloaded (A self-evident truth!)

From which the following understanding can be derived:

C) The security for the transactions (user authentication, authorization, etc., etc.) is NOT provided on the client side (the user's computer or smartphone) by an obfuscated "binary black box" piece of software where source code is not provided, but rather on the server side (the Bank's side!)

(Oh sure, Web Browsers provide encryption to prevent the middle segment of the communication path, the Internet, from listening in, but the encryption libraries of open-source web browsers are also typically themselves open-source, thus easily transferred to / imported into the source code bases / software component stack -- of other Apps!)

Well, if we know A), B), and C), then we also understand that a truly Open-Source Banking App, giving exactly the same security guarantees that an Open-Source Web Browser does today, is possible!
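
The client-side "security" in question is, concretely, just verified TLS -- and the open-source building blocks for it already ship in every mainstream language's standard library. A minimal sketch of the argument (the hostname below is a placeholder; this is an illustration, not a real banking client):

```python
import ssl

# Everything a browser does for transport security on the client side --
# encryption plus certificate and hostname verification -- is available in
# open-source code. A hypothetical open-source banking app would build the
# same kind of verified TLS context before talking to the bank's servers:
context = ssl.create_default_context()

# These defaults match the guarantees a browser gives you: certificates are
# checked against the system's trusted CAs, and the server's hostname must
# match its certificate.
print(context.verify_mode == ssl.CERT_REQUIRED)
print(context.check_hostname)

# A hypothetical client would then do, e.g.:
#   with socket.create_connection(("bank.example.com", 443)) as sock:
#       with context.wrap_socket(sock,
#                                server_hostname="bank.example.com") as tls:
#           ...  # HTTPS requests to the bank's existing online-banking API
```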

Such an app, if it were to exist, due to its open-source nature, would not be bound by artificial constraints -- such as refusing to run on a rooted smartphone...

Also, in theory such an app, were it to exist, could be run on very minimal, possibly more secure (than your average bloated smartphone) alternative hardware...

Also, if you think about it... Bitcoin and other cryptocurrency apps -- are fundamentally that App (!) -- just that they use the Blockchain, and not a Bank, as the back-end! :-)

You know, you have a payment-provider App. It could have any number of back-ends to it... Bank, Blockchain, ?, ???

You tell me... :-)


>"This wasn’t a fully transparent codebase, though. Like many production appliances, a large portion of the Python logic was shipped only as compiled .pyc files."

Observation: One of the great virtues of Python is that typically when someone runs a Python program, they are running a Python interpreter on transparent Python source code files, which means that typically, the person that runs a Python program has the entirety of the source code of that program!

But that's not the case here!

.pyc, aka "pre-compiled, pre-tokenized, pre-digested" aka "obfuscated" python -- one of the roots of this problem -- is both a blessing and a curse!

It's a blessing because it allows Python code to have different interpretation/compilation/"digestion" stages cached -- which allows the Python code to run faster -- a very definite blessing!

But it's also (equal-and-oppositely!) -- a curse!

It's a curse because as the author of this article noted above, it allows Python codebases to be obfuscated -- in whole or in parts!

Of course, this is equally true of any compiled language -- for example, with C code one typically runs the compiled binaries, and compiled binaries are obfuscated/non-transparent by their nature. So this is nothing new!

Now, I am not a Python expert. But maybe there's a Python interpreter switch which says 'do not run any pre-digested cached/obfuscated code from .pyc files whose source is absent, and stop the run and emit an error message if any are encountered'.

I know there's a Python switch to prevent the compilation of source code into .pyc files (the -B switch, or the PYTHONDONTWRITEBYTECODE environment variable). Of course, the problem with this approach is that code typically runs slower...

So, what's the solution? Well, pre-created (downloaded) .pyc files where the corresponding Python source code is not provided are sort of like the equivalent of "binary blobs" aka "binary black boxes" that ship with proprietary software.

Of course, some software publishers that do not believe in Open-source / transparent software might argue that such binary blobs protect their intellectual property... and if there's a huge amount of capital investment necessary to produce a piece of software, then such arguments are not 100% entirely wrong...

Getting back to Python (or, more broadly, any interpreted language with the same pre-compilation/obfuscation capability), what I'd love to see is a runtime switch -- call it '-t' or '--transparent' or something like that -- where, if passed to the interpreter prior to running a program, then whenever it encounters a .pyc (or the equivalent, for whatever format that language uses -- call it "pre-tokenized", "pre-parsed", "pre-compiled", or whatever) without the corresponding source, it immediately stops execution and reports an error to the end user, giving the exact file, line, and line number in the last good source file that imported it.

This would allow easy discovery of such "black box" / "binary blob" non-transparent dependencies!
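
For what it's worth, today's CPython lets you approximate such a switch in userland with a meta path finder that refuses sourceless bytecode imports. A sketch (the class name is invented for this example, and a real appliance might load code through other channels, so this is illustrative rather than bulletproof):

```python
import importlib.machinery
import sys

class TransparencyFinder:
    """Refuse imports that would load sourceless .pyc bytecode.

    A userland approximation of the '--transparent' switch wished for
    above (the finder name is made up for this sketch).
    """

    def find_spec(self, name, path=None, target=None):
        # Ask the normal path-based finder where this module lives.
        spec = importlib.machinery.PathFinder.find_spec(name, path)
        if spec and spec.origin and spec.origin.endswith(".pyc"):
            # Sourceless bytecode: stop and report, rather than run it.
            raise ImportError(
                f"refusing sourceless bytecode module {name!r} ({spec.origin})"
            )
        # Otherwise decline, letting the regular import machinery proceed.
        return None

# Installing it first on sys.meta_path makes it see every import.
sys.meta_path.insert(0, TransparencyFinder())
```

Normal modules are unaffected, because a module imported from source has a .py origin even when its bytecode is cached under __pycache__; only .pyc-only ("sourceless") modules trip the check.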

(Note to Future Self: Put this feature in any future interpreters you write... :-) )

