If people assume it's "just a type of sausage" it suggests a dictionary entry is needed to explain otherwise.
It's a term referring to a small set of types of sausages served in a specific small set of ways. In some places, "hot dog" can be a synonym for whichever type of sausage is most common in hot dogs there, but the term still more commonly refers to the assembly of a wiener or frankfurter wrapped in a bread of some sort.
> the term is still more commonly referring to the assembly of a wiener or frankfurter wrapped in a bread of some sort
I had that disagreement in an alpine resort once. A seller was vending some sort of sausage stuffed in a bread. I was hungry, so I walked up to them with money in hand and said "A hot dog please" while pointing at the only thing they were selling. The lady was mortified by my utterance, and was not willing to accept the money until I agreed with her that it is a bratwurst and not a hot dog. :D The disagreement felt a bit academic, but given that she was holding the hot dogs hostage and money does not taste that good, she won the argument.
Personally, I think a bratwurst is borderline, in that it is "close enough" that I can see someone calling a bratwurst in a bread a hot dog, and I wouldn't react if a shop listed them as a type of hot dog on a menu.
But, yeah, some places "hot dog" also carries a connotation of potentially using lower quality sausages, so I can also totally see a bratwurst vendor taking offense...
In the US, if you ordered a hot dog and got a sausage (or vice versa), it would be very reasonable to return the item and ask for something else. They are culturally completely different, the same way Cheerios in milk is not another cold soup like gazpacho is.
All words in a thesaurus would generally also be in a dictionary? The difference between a thesaurus and a dictionary is what each tells you about a word.
I actually run an adults-only community site and you are correct. I have it in a popup that appears on every "fresh" visit to the site, it's in the giant bold print you agree to when you register, and on the technical end, I send every possible header and other signal to let filtering software know it's an adults-only space. If a child is accessing that site, they are doing so because their parent didn't even attempt to prevent them from doing so. And now I'm having to look into ID verification services that are going to quintuple the costs of hosting this free community, at a time when community is more important than ever.
Email is the digital equivalent of a postcard. I really want to argue that this is a bad idea (because it is), but depending how you set your email system up, it might actually compare favourably to using a third-party identity verification company.
Better to use some kind of secure drop web portal (perhaps https://securedrop.org/) that's actually designed for that kind of thing, though.
The first commit was 17k lines. So this was either developed without using version control or at least without using this gh repo. Either way I have to say certain sections do feel like they would have been prime targets for having an LLM write them. You could do all of this by hand in 2026, but you wouldn't have to. In fact it would probably take forever to do this by hand as a single dev. But then again there are people who spend 2000 hours building a cpu in minecraft, so why not. The result speaks for itself.
> The first commit was 17k lines. So this was either developed without using version control or at least without using this gh repo.
Most of my free-time projects are developed by me shooting the shit with code on disk for a couple of months until it's in a working state, and then I make one first commit. Alternatively, I commit a bunch iteratively, but before making it public I fold it all into one commit, which becomes the init. 20K lines in the initial commit is not that uncommon; it depends a lot on the type of project though.
I'm sure I'm not alone with this sort of workflow(s).
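For what it's worth, the fold-it-into-one-init-commit step is easy to do with an orphan branch. A rough sketch, using a throwaway repo so it's runnable anywhere (in a real project you'd only run the last three git commands in your existing checkout; branch and file names here are made up):

```shell
# Collapse a messy private history into a single "Initial commit".
set -e
repo=$(mktemp -d) && cd "$repo" && git init -q
git config user.email you@example.com
git config user.name  you

# stand-in for months of messy private history
echo v1 > main.c && git add main.c && git commit -qm "WIP almost working"
echo v2 > main.c && git commit -qam "WIP still broken"

# start a branch with no parent history, carrying the current tree over
git checkout -q --orphan public
git add -A
git commit -qm "Initial commit"

git rev-list --count HEAD   # prints 1: the WIP commits are gone
```

Pushing `public` to a fresh remote then gives exactly the "one big first commit" pattern being discussed.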
Can you explain the philosophy behind this? Why do this, what is the advantage? Genuinely asking, as I'm not a programmer by profession. I commit often irrespective of the state of the code (it may not even compile). I understand git commit as a snapshot system. I don't expect each commit to be a pristine, working version.
A lot of people in this thread have argued for squashing, but I don't see why one would do that for a personal project. In large-scale open source or corporate projects I can imagine they would like to have clean commit histories, but why for a personal project?
I do that because there's no point in anyone seeing the pre-release versions of my projects. They're a random mess that changed the architecture 3 times. Looking at that would not give anyone useful information about the actual app. It doesn't even give me any information. It's just useless noise, so it's less confusing if it's not public.
I don't care about backing up unfinished hobby projects; I just write and test until I arbitrarily share it or, if I'm completely honest, abandon it. I may not 'git init' for months, let alone make any commits or push to any remotes.
Reasoning: skip SCM 'cost' by not making commits I'd squash and ignore, anyway. The project lifetime and iteration loop are both short enough that I don't need history, bisection, or redundancy. Yet.
Point being... priorities vary. Not to make a judgement here, I just don't think the number of commits makes for a very good LLM purity test.
You should push to a private working branch, and frequently. But when merging your changes to a central branch, you should squash all the intermediate commits and provide one commit with the asked-for change.
Enshrining "end of day commits", "oh, that didn't work" mistakes, etc. is not only demoralizing for the developer(s), but it makes tracing changes all but impossible.
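A sketch of that flow with `git merge --squash`, again in a throwaway repo so it's self-contained (branch and commit names are made up):

```shell
# Land a working branch as ONE commit on main; the WIP commits stay
# off the shared history.
set -e
repo=$(mktemp -d) && cd "$repo" && git init -q
git config user.email you@example.com
git config user.name  you
echo base > app.txt && git add app.txt && git commit -qm "base"
git branch -m main

git checkout -qb feature/widget                 # private working branch
echo step1 >> app.txt && git commit -qam "eod commit"
echo step2 >> app.txt && git commit -qam "oh, that didn't work"
echo step3 >> app.txt && git commit -qam "fixed"

git checkout -q main
git merge --squash feature/widget >/dev/null    # stages the combined diff, no commit yet
git commit -qm "Add widget support"             # the one commit anyone else sees

git log --oneline                               # only 2 commits on main
```

Same end state as rebasing interactively and squashing, but without rewriting the private branch itself.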
> I don't expect each commit to be pristine, working version.
I guess this is the difference: I expect the commit to represent a somewhat working version, at least when it's in upstream. Locally it doesn't matter that much.
> Why do this, what is the advantage?
Cleaner, I suppose. It doesn't make sense to have 10 commits where 9 are broken and half-finished and the 10th is the only one that works; I'd just rather have one larger commit.
> they would like to have clean commit histories but why for a personal project?
Not sure why it'd matter if it's personal, open source, corporate or anything else; I want my git log clean so I can do `git log --oneline` and actually understand what I'm seeing. If there are 4-5 commits with "WIP almost working" between each proper commit, that's too much noise for me, personally.
But this isn't something I'm dictating everyone to follow, just my personal preference after all.
Fair enough. Thanks for the clarification. Personally, I think everything before a versioned release (even something like 0.1) can be messy. But from your point I can see that a cleaner history will have advantages.
Further, I guess if the author is expecting contributions to the code in the future, it might be more "professional" for the commits to be only the ones which are relevant.
My own projects, I consider, are just for my own learning and understanding so I never cared about this, but I do see the point now.
Regardless, I think it still remains a reasonable sign of someone doing one-shot agent-driven code generation.
One point I missed, that might be the most important, since I don't care about it looking "professional" or not, only care about how useful and usable something is: if you have commits with the codebase being in a broken state, then `git bisect` becomes essentially useless (or very cumbersome to use), which will make it kind of tricky to track down regressions unless you'd like to go back to the manual way of tracking those down.
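For anyone who hasn't used it, here's a sketch of why bisect depends on every commit being checkable. It builds a throwaway repo of 8 commits, "breaks" it at commit 5, and lets git binary-search for the culprit; the `sh -c` check is a stand-in for whatever build/test step your real project uses:

```shell
# Demonstrate `git bisect run` finding the first bad commit.
set -e
repo=$(mktemp -d) && cd "$repo" && git init -q
git config user.email you@example.com
git config user.name  you
for i in 1 2 3 4 5 6 7 8; do
  echo "$i" > state
  git add state && git commit -qm "commit $i"
done

git bisect start HEAD HEAD~7 >/dev/null          # bad tip, known-good ancestor
git bisect run sh -c '[ "$(cat state)" -lt 5 ]' >/dev/null   # exit 0 = good
git log -1 --format=%s refs/bisect/bad           # prints: commit 5
git bisect reset >/dev/null
```

If half those commits didn't even compile, the check command would report them "bad" for the wrong reason and the search would converge on noise instead of the regression.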
> Regardless, I think it still remains a reasonable sign of someone doing one-shot agent-driven code generation.
Yeah, why change your perception in the face of new evidence? :)
Regarding changing the perception, I think you did not understand the underlying distrust. I will try to use your examples.
It's a moderate size project. There are two scenarios: author used git/some VCS or they did not use it. If they did not use it, that's quite weird, but maybe fine. If they did use git, then perhaps they squashed commits. But at certain point they did exist. Let's assume all these commits were pristine. It's 16K loc, so there must be decent number of these pristine commits that were squashed. But what was the harm in leaving them?
So the history must have contained both clean commits and broken commits. But we have seen this author likes to squash commits. Hmm, so why didn't they do it before, and only towards the end?
Yes, I have been introduced to a new perception, but the world does not work on "if X, then not Y" principles. And this is a case where the two things being discussed are not mutually exclusive, as you are assuming. But I appreciate this conversation because I learnt the importance and advantages of keeping a clean commit history, and I will take that into account next time before reaching the conclusion that something is just another one-shot LLM-generated project. But nevertheless, I will always consider the latter a reasonable possibility.
> I guess this is the difference, I expect the commit to represent a somewhat working version,
On a solo project I do the opposite: I make sure there is an error where I stopped last. Typically I put in a call to the function that is needed next, so I get a linker error.
6 months later, when I go back to the project, that link error tells me all I need to know about what comes next.
Or the first thousand commits were squashed. The first public commit tells nothing about how this was developed. If I were to publish something I had worked on alone for a long time, I would definitely squash all early commits into a single one, just to be sure I don't accidentally leak something that I don't want to leak.
For example, when the commits were made: I would not like to share with the whole world when I have worked on some project of mine. The commits themselves, or the commit messages, could also contain something you don't want to share.
At least I approach stuff differently depending if I am sharing it with whole world, with myself or with people who I trust.
Scrubbing git history when going from private to public should be seen as totally normal.
Hmm I can see that. Some people are like that. I sometimes swear in my commit messages.
For me it's quite funny to sometimes read my older commit messages. To each their own.
But my opinion on this is the same as with other things that have become tell-tale signs of AI-generated content: if something you used to do starts getting questioned as AI-generated, and you find that label offensive, it's better to change that approach.
If you have for example a personal API key or credentials that you are using for testing, you throw it in a config file or hard code it at some point. Then you remove them. If you don't clean you git history those secrets are now exposed.
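This is easy to demonstrate: deleting the file in a later commit does not remove it from history. A self-contained sketch in a throwaway repo (the file name and key are obviously fake):

```shell
# Show that a "removed" secret is still readable from earlier commits.
set -e
repo=$(mktemp -d) && cd "$repo" && git init -q
git config user.email you@example.com
git config user.name  you

echo "API_KEY=hunter2" > config.ini
git add config.ini && git commit -qm "add config"
git rm -q config.ini && git commit -qm "remove leaked key"

git show HEAD~1:config.ini   # prints API_KEY=hunter2 -- still in history
```

Actually scrubbing it means rewriting history (e.g. with a tool like git-filter-repo, installed separately) and, realistically, rotating the key anyway, since anyone who cloned the repo already has it.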
Hello, not the poster, but I am BarraCUDA's author. I didn't use Git for this. This is just one of a dozen compiler projects sitting in my folder, hence the one large initial commit. I was only posting on github to get feedback from r/compilers and friends I knew.
The original test implementation of this for instance was written in OCaml before I landed on C being better for me.
I wonder if renaming variables to all reference a single movie or book (go through the exe and rename each new variable to the next word or letter in Monty Python's Holy Grail) would do anything.
Yes, and it seems like it's purposefully ignored in the "body scan" debate. Full CT scans would be more problematic, and MRIs (especially no-contrast ones) don't pick up a lot of things... but having annual comparisons over a few years would likely fill in some of those gaps. Literally and figuratively.
It's as if people have never had shipping itemized before.
The only reason aliexpress shopping is cheap is because the rest of the world foots the bill. Unless somebody has finally removed China's "Developing Country" status, which has gotten them essentially free international parcel service for the best part of 100 years.
Yeah OK, but if I only want 5 pieces and I have to choose between $5 or $30, I'm not going to think about the geopolitical situation, I'm just going to get the cheaper one.
I buy small parts with "Choice" shipping on AliExpress sometimes, because it's cheap and [usually] quick and they take care of all of that pesky tariff and customs business in ways that never have an opportunity to surprise me.
For years now, the shipping process has worked like this for me: they gather it up on their end and send the stuff on a cargo plane to a sorting facility at or near JFK airport in New York.
If the order includes things from several different sellers, then at some point they generally get combined into one bag.
From there, they just mail it -- using regular, domestic USPS service. It shows up in my mailbox on my porch in Ohio a few days later.
Although it certainly was a thing I've experienced in the past, at no point does the process I've described exploit the "Developing Country" loophole. They just send things to the other side of the world (at their expense), and then pay the post office the same way as anyone else does to bring it to my door.
EDIT: Oh lord, bad typo in my previous comment- it should have been aliexpress SHIPPING not Shopping.
It's not the same, what you described is Direct Entry (somewhere around page 25, linked below). Apparently the Terminal Dues system has been massively changed in the 5 years since I last looked- but it still appears unfavorable to USPS and US sellers, while favoring high volume foreign shippers.
As for how aliexpress delivers stuff since the tariffs: 1) no-name last mile; 2) USPS last mile, or USPS the entire way.
I don't know if any are associated with "Choice", Paid store shipping, and/or free store shipping.
Since I normally buy from aliexpress to avoid the insane 200-800% markups amazon/ebay/walmart/etc dropshippers demand, the $5-$10 in shipping doesn't factor in.
As a consumer, here's how AliExpress Choice shipping functions for me: Like buying a widget from a shop downtown, the price is the price.
I don't see what anyone will pay (or has paid) for duties or tariffs or fees or delivery, I don't have any idea what the markup is at any level, and I don't know what GAO table they or anyone else used to get it to happen. That's outside of my purview.
With this method: Same as with the shop downtown, I'm not importing anything myself; I don't see any customs forms or declarations at all. AliExpress handles all of that business, not me.
I can peek behind the curtain a bit and see some aspects of how things move from place to place as physical entities using the tracking data that they provide. And that's about it, until it eventually shows up inside of my mailbox -- and then I can have a nice gander at the labels and see that it was sent with USPS domestic postage.
This process doesn't (can't, AFAICT) abuse my nation's postal system, and I like that aspect quite a lot.
The downsides are cost and availability: There may be a dozen or more sellers offering seemingly-identical widgets on AliExpress, but maybe only one or two (if any) that ship that particular widget Choice. Like Prime, it can actually end up costing a bit more than other methods.
But it's fast, still cheap in absolute terms, and there's zero BS on my end so I like those parts, too.