My understanding is that food delivery companies take a huge cut (around 30%), so restaurants are forced to raise their prices significantly or risk losing customers. Even with that cut, food delivery customers still have to pay significant delivery and service fees.
The author makes a good point about language capabilities enabling certain libraries to be written, just as a DSL makes it easier to reason about problems and implement solutions with the right kind of abstractions and language ergonomics (usually at the expense of expressivity and flexibility).
There was a time in my life when I designed languages and wrote compilers. One type of language I’ve always thought could be made approachable to non-technical users is an outline-like language with English-like syntax. Being a DSL, the shape of the outline would be largely fixed and kept on guardrails: it couldn’t express arbitrary instructions the way a general-purpose programming language can, but an escape hatch (to a more expressive language) could be provided for advanced users. Areas where this DSL could be used include generating common admin portal apps and workflow automation.
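To make that concrete, here is a rough sketch of the idea; the outline syntax, the wording, and the toy parser below are all hypothetical illustrations, not an actual design.

```python
# Hypothetical example of the outline DSL, plus a toy parser that turns
# indentation into a tree. The guardrail is that each line is one of a small
# set of allowed phrases, not arbitrary code.
WORKFLOW = """\
when a new support ticket arrives
    if the ticket mentions "refund"
        assign it to the billing queue
        notify the billing manager
    otherwise
        assign it to the general queue
"""

def parse_outline(text, indent=4):
    """Turn an indented outline into a nested (line, children) tree."""
    root = ("root", [])
    stack = [(-1, root)]
    for raw in text.splitlines():
        if not raw.strip():
            continue
        depth = (len(raw) - len(raw.lstrip(" "))) // indent
        node = (raw.strip(), [])
        while stack[-1][0] >= depth:   # climb back up to this node's parent
            stack.pop()
        stack[-1][1][1].append(node)
        stack.append((depth, node))
    return root

if __name__ == "__main__":
    import pprint
    pprint.pprint(parse_outline(WORKFLOW))
```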
That said, with the advent of AI assistants, I’m not sure if there is still room for my DSL idea.
If we see ourselves less as programmers and more as software builders, then it doesn’t really matter if our programming skills atrophy in the process of adopting this tool, because it lets us build at a higher abstraction level, kind of like how a PM does it. This up-leveling in abstraction has happened over and over in software engineering as our tooling improves over time. I’m sure some excellent software engineers here couldn’t write assembly code to save their lives, but they are wildly productive and respected for what they do - building excellent software.
That said, as long as there’s the potential for AI to hallucinate, we’ll always need to be vigilant - for that reason I would want to keep my programming skills sharp.
AI assisted software building by day, artisanal coder by night perhaps.
I think this question can be answered in so many ways - first of all, piling on abstraction doesn’t automatically imply bloat - with proper compile-time optimizations you can achieve zero-cost abstractions, as C++ compilers do.
Secondly, bloat comes in many forms, each with different causes. Did you mean bloat as in huge dependency installs like those node modules? Or did you mean an Electron app where a whole browser is bundled? Or perhaps you mean the insane number of FactoryFactoryFactoryBuilder classes that Java programmers have to bear because of misguided overarchitecting? The 7 layers of network protocols - is that bloat?
These are human decisions - trade-offs between delivering value fast and performance. Foundational layers are usually built with care, and the right abstractions help with correctness and performance. At the application layers, requirements change more quickly and people are more accepting of performance hits, so they pick tech stacks that you would describe as bloated for faster iteration and delivery of value.
So even if I used abstraction as an analogy, I don’t think that automatically implies AI assisted coding will lead to more bloat. If anything it can help guide people to proper engineering principles and fit the code to the task at hand instead of overarchitecting. It’s still early days and we need to learn to work well with it so it can give us what we want.
You'd have to define bloat first. Is internationalization bloat? How about screen reader support for the blind? I mean, okay, Excel didn't need a whole flight simulator in it, but just because you don't use a particular feature doesn't mean it's necessarily bloat. So first: define bloat.
Some termite mounds in Botswana already reach over two meters high, but these traditional engineering termites will be left behind in their careers if they don't start using AI and redefine themselves as mound builders.
That’s really awesome to have a viable self-bootstrapped project! Did you have to spend a lot of time maintaining it or deal with customer support after the initial launch? A low-maintenance yet viable business would truly be the dream!
It is pretty close to that dream scenario now, yes.
Because the tech stack is stable (and fully matured), I almost never have to deal with 'emergency' technical support or bug fixes. The servers just hum along.
I do handle customer support myself, but the volume is very low relative to the traffic. 90% of the tickets are just non-technical questions about billing or ad-free subscriptions.
This low-maintenance overhead is exactly what allows me to work on new features or experiment with new projects (like my upcoming AI drawing school) without burning out.
Probably the fact that it's a pretty terrible idea. It means you take a normal properly typed API and smush it down into some poorly specified text format that you now have to write probably-broken parsers for. I often find bugs in programs that interact with `/proc` on Linux because they don't expect some output (e.g. spaces in paths, or optional entries).
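A small sketch of the kind of failure I mean, using Linux's /proc/[pid]/stat; the parsing style is illustrative, not taken from any particular program.

```python
# /proc/[pid]/stat puts the process name (comm) in parentheses as field 2, and
# that name may contain spaces or ')' characters. A naive whitespace split
# silently shifts every later field for a name like "(tmux: server)".
def parse_stat_naive(line):
    fields = line.split()
    return {"pid": int(fields[0]), "state": fields[2], "ppid": int(fields[3])}

def parse_stat_robust(line):
    # comm is everything between the first '(' and the *last* ')'.
    lparen, rparen = line.index("("), line.rindex(")")
    rest = line[rparen + 1:].split()
    return {"pid": int(line[:lparen]),
            "comm": line[lparen + 1:rparen],
            "state": rest[0],
            "ppid": int(rest[1])}

if __name__ == "__main__":
    with open("/proc/self/stat") as f:   # Linux only
        print(parse_stat_robust(f.read()))
```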
The only reasons people think it's a good idea in the first place are a) every programming language can read files, so it sort of gives you an API that works with any language (but a really bad one), and b) it's easy to poke around in from the command line.
Essentially it's a hacky cop-out for a proper language-neutral API system. In fairness it's not like Linux actually came up with a better alternative. I think the closest is probably DBus which isn't exactly the same.
I think you have to standardize a basic object system and then allow people to build opt-in interfaces on top, because any single-level abstraction will quickly be pulled in countless directions for as many users.
Probably that not everything can be cleanly abstracted as a file.
One might want to, e.g., have fine control over how a network connection is handled. You can abstract that as a file, but it becomes increasingly complicated and can make API design painful.
> Probably that not everything can be cleanly abstracted as a file.
I would say almost nothing can be cleanly abstracted as a file. That’s why we got ioctl (https://en.wikipedia.org/wiki/Ioctl), which is a bad API (a call just means “do something with this file descriptor”, with only conventions introducing some consistency).
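As a small illustration of that grab-bag nature, here's a sketch using Python's fcntl wrapper and one request code I know exists (TIOCGWINSZ): the same entry point takes a file descriptor, an opaque request number, and a byte buffer whose meaning depends entirely on that number.

```python
import fcntl
import struct
import sys
import termios

def terminal_size(fd=None):
    """Ask the tty driver for its window size via the TIOCGWINSZ ioctl."""
    if fd is None:
        fd = sys.stdout.fileno()
    # The kernel fills a struct winsize {rows, cols, xpixel, ypixel}; only the
    # request code tells it (and us) how to interpret these 8 bytes.
    buf = fcntl.ioctl(fd, termios.TIOCGWINSZ, struct.pack("HHHH", 0, 0, 0, 0))
    rows, cols, _, _ = struct.unpack("HHHH", buf)
    return rows, cols

if __name__ == "__main__":
    print(terminal_size())   # raises OSError ("Inappropriate ioctl") off a tty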
If everything can be represented as a Foo or as a Bar, then this actually clears up the discussion, allowing the relative merits of each representation to be discussed. If something is a universal paradigm, all the better to compare it to alternatives, because one will likely be settled on (and then mottled with hacks over time; organic abstraction sprawl FTW).
The fact that everything is not a file. No OS actually implements that idea including Plan9. For example, directories are not files. Plan9 re-uses a few of the APIs for them, but you can't use write() on a directory, you can only read them.
Pretending everything is a file was never a good idea and is based on an untrue understanding of computing. The everything-is-an-object phase the industry went through was much closer to reality.
Consider how you represent a GUI window as a file. A file is just a flat byte array at heart, so:
1. What's the data format inside the file? Is it a raw bitmap? Series of rendering instructions? How do you communicate that to the window server, or vice-versa? What about ancillary data like window border styles?
2. Is the file a real file on a real filesystem, or is it an entry in a virtual file system? If the latter, then you often lose a lot of the basic features that make "everything is a file" attractive, like the ability to move files around or arrange them in a user-controlled directory hierarchy. VFSes like procfs are pretty limited. You can't even add your own entries, like symlinks, to procfs directories.
3. How do you receive callbacks about your window? At this point you start to conclude that you can't use one file to represent a useful object like a window, you'd need at least a data and a control file where the latter is some sort of socket speaking some sort of RPC protocol. But now you have an atomicity problem.
4. What exactly is the benefit again? You won't be able to use the shell to do much with these window files.
And so on. For this reason Plan9's GUI API looked similar to that of any other OS: a C library that wrapped the underlying file "protocol". Developers didn't interact with the system using the file metaphor, because it didn't deliver value.
All the post-UNIX operating system designs ignored this idea because it was just a bad one. Microsoft invested heavily in COM and NeXT invested in the idea of typed, IDL-defined Mach ports.
Sure, why would they? COM was rendered irrelevant by the move to the web. Microsoft lost out on the app serving side, and when they dropped the ball on ActiveX by not having proper UI design or sandboxing they lost out on the client too. Probably the primary use case outside of legacy OPC is IT departments writing PowerShell scripts or Office plugins (though those are JS based now too).
COM has been legacy tech for decades now. Even Microsoft's own security teams publish blog posts enthusiastically explaining how they found this strange ancient tech from some Windows archaeological dig site, lol. Maybe one day I'll be able to mint money by doing maintenance consulting for some old DCOM based systems, the sort of thing where knowing what an OXID resolver is can help and AI can't do it well because there's not enough example code on GitHub.
Because since Windows Vista all new APIs are COM-based; the Win32 C API is basically stuck on a Windows XP view of the universe, with minor exceptions here and there.
Anyone who has to deal with Windows programming quickly discovers that COM is not the legacy tech people on the Internet make it out to be.
Sure I mean, obviously the Windows API is COM based and has been for a long time. My point is, why seriously invest in the Windows API at all? A lot of APIs are only really being used by the Chrome team at this point anyway, so the quality of the API hardly matters.
Game development for one, and there are still plenty of native applications on Windows to choose from - most stuff in graphics, video editing, DAWs, life sciences, and control automation. Thankfully we don't need Chrome in a box for everything.
Your remark kind of proves the point that the Web is now the ChromeOS platform, since you could have said "browser" instead.
They have to an extent. The /proc file system on Linux is directly inspired by Plan 9, IIRC. Other things like network sockets never got that far and are more related to their BSD kin.
Abstractions are inherently a tradeoff, and too much abstraction hurts you when the assumptions break.
For a major example, treating a network resource like a file is neat, elegant, and simple while the network works well. However, once you have unreliable, slow, or intermittent connectivity, the abstraction breaks: you have to handle the fact that it's not really like a local file, and your elegant abstraction has to be mangled with all kinds of special handling so that your apps can cope.
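For instance, here's a hedged sketch (the function names and protocol details are hypothetical) of what that mangling tends to look like: a local read is one call, while the network-backed version grows timeouts, retries, and partial-read handling that the file metaphor has no place for.

```python
import socket

def read_local(path):
    with open(path, "rb") as f:        # either works or fails immediately
        return f.read()

def read_remote(host, port, request, attempts=3, timeout=2.0):
    """The 'file over a network' version: timeouts, retries, partial reads."""
    for attempt in range(attempts):
        try:
            with socket.create_connection((host, port), timeout=timeout) as s:
                s.sendall(request)
                chunks = []
                while True:
                    chunk = s.recv(4096)   # may stall, or arrive in pieces
                    if not chunk:
                        return b"".join(chunks)
                    chunks.append(chunk)
        except (socket.timeout, ConnectionError):
            if attempt == attempts - 1:
                raise
```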
> For almost 70 years, American companies could deduct 100% of qualified research and development spending in the year they incurred the costs. Salaries, software, contractor payments — if it contributed to creating or improving a product, it came off the top of a firm’s taxable income.
According to the article, as long as tech workers contribute to improving or creating a product (be it games or apps), their salaries count as R&D costs.
I worked in games for 2 years before the studio shut down. It wasn't because of "R&D" tax breaks. None of the recent layoffs or studio closures are explained by that. Nor are the Microsoft, Dell, or Intel layoffs, which aren't game-related.
To qualify for R&D tax breaks (IIRC, having identified qualifying work for a segment of my firm), there must be elements of hypothesis, experimentation, results, etc. that I would consider more science-y 'Research' than just turn-the-crank software 'Development.' It has to be both. And that has to be documented. And offshore research+development doesn't get you a tax break. The irony is that the R+D tax treatment actually discourages onshore pure development as a 'trade' and encourages a split of onshore R+D and offshore D.
This sort of thing appears to be self-reported; I don't know if it ever gets audited. I don't know whether big tech lies or creatively interprets what counts, and whether that has contributed to the issue. But this article sort of over-represents what qualifies as R&D for US tax purposes.
Which makes sense. Software is functionally a capital asset, so really it should be depreciated across the length of the copyright term (unless the company wants to release it to the public domain to fully depreciate it early).
Maybe software should be a capital asset, but these depreciation rules don't fix that issue.
The rule says that if you pay someone $200k to develop software, you now have a $200k asset that devalues to $0 over 5 years (starting midyear). That's just plain weird.
For our example a depreciation table might look like:
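(Assuming straight-line amortization with a half-year convention, which is my understanding of how the current Section 174 rules work:)

Year 1: $20,000 deducted, $180,000 of basis remaining
Year 2: $40,000 deducted, $140,000 remaining
Year 3: $40,000 deducted, $100,000 remaining
Year 4: $40,000 deducted, $60,000 remaining
Year 5: $40,000 deducted, $20,000 remaining
Year 6: $20,000 deducted, $0 remaining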
The final effect of the Section 174 rule change is that you still end up with a software asset worth $0. However, you now have an extra $200k of taxable income in year one, with the matching $200k of expenses spread over 5 years. The extra taxes paid up front can be a lot, although that money is really just being lent to the government for a few years at 0%. The actual financial costs are fucking complicated.
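To put rough numbers on the year-one hit (these figures are made up and assume a flat 21% corporate rate, ignoring state taxes and everything else):

```python
RATE = 0.21
revenue, dev_cost = 500_000, 200_000        # hypothetical numbers

# Old rules: the full dev cost is expensed in year one.
old_taxable = revenue - dev_cost            # 300,000
# After the 174 change: only the first half-year slice (10%) is deductible in year one.
new_taxable = revenue - dev_cost * 0.10     # 480,000

print("extra year-one tax:", (new_taxable - old_taxable) * RATE)   # 37,800.0
```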
Accounting and taxes are two absolutely essential skills to understand if you ever wish to be a founder (and useful anyway).
Finding a solution for valuing assets is difficult. The historical solution of depreciation is broken for software, intellectual property, and goodwill. In theory, dividend and capital gains taxation already deal with the issue (company taxation at x% kinda ends up at $0 because the shareholder pays y% and claims back the x% through imputation).
Right, that weirdness is why it should be depreciated over the length of the copyright term. You spend $200k this year, and now you have a useful asset for the next 95 years (or 120 years if you never publish it).
If it turns out it's not useful, we could then allow companies to publish the source and release it into the public domain to immediately "destroy" the asset (the copyright) and claim their deduction. So failed R&D projects would be deductible right away as long as the public gets them, and ones that result in a useful asset get depreciated based on how long they actually last, which is currently potentially multiple lifetimes.
I don't think copyright term is a good rubric/measure here. For SaaS, a company can keep the software locked up indefinitely, regardless of copyright term. Employees can be contractually obligated not to publish source code, even if the copyright has expired.
Amortizing development cost over the useful life of the software is maybe a reasonable thing to do (I don't think it is, but let's for a minute say I agree), but determining "useful life" is not simple.
I get your thinking here but copyright isn’t the only relevant intellectual property constraint.
Software built by a business is a trade secret independent of its copyrightability. Even after the expiry of copyright a business can continue to exploit it as a proprietary asset.
In my experience, most software is not a capital asset that can be sold. Most software is a one-off throwaway script to generate a report, or a modification to an existing piece of software to change its behavior. Most software isn't even written by software companies, and now having a software engineer on staff is prohibitively expensive, despite their job being functionally similar to a factory worker's.
By your logic, the salaries of technical writers should be amortized too, because theoretically the bank operations manual could also be a capital asset.
At the last couple of companies we worked at, they just sent out surveys on how much time went to different activities. We couldn't possibly fill that out honestly, as that wasn't tracked.
Which, I think is an overlooked part of this. They must constantly have gotten feedback that people were lying to them.
Vivpage is a vim-like WYSIWYG webpage editor. You can use quick shortcut keys to navigate around, perform selections, and insert/edit/delete elements and styles, and it comes complete with undo/redo capability.
I built this because I thought it'd be neat to have some of that vim-like agility when editing a webpage.
Paul Graham once described the quality of being formidable in a startup founder. If they say they'll get something done in 2 weeks, they'll get it done in 2 weeks, even if poorly done. Formidable founders stand the greatest chance at succeeding. System D certainly reminds me of that.
We're just trapped in the present-day tech zeitgeist. Taking a step back and looking at things on a longer time scale, you'll see there have always been good people keeping things in balance, and the general state of things improves over time. There are great people working on tools to identify AI-generated content, and even regular people will eventually catch up to the impact generative AI brings and adapt accordingly, as they have with most other technologies and developments.
Unfortunately, the last time we had a publishing revolution we had two world wars before we got the press properly regulated. We’re now having two publishing revolutions within 30 years and are only in early-stage fascism.