Hacker News | heisig's comments

Unfortunately I would not be surprised if the real death toll is even higher. I have first-hand information. We are talking about indiscriminate shooting with heavy machine guns into peaceful protests, happening in every city of the country. The rule of law has completely broken down. The wounded avoid hospitals because they are afraid of getting killed there.


Unfortunately those videos exist. There are videos of relatives walking for hours from body bag to body bag to find the remains of their lost ones. There are videos of people with heavy machine guns shooting indiscriminately into peaceful protests. There are videos of executions. Everything has been recorded.

There is a reason why the Iranian government cannot activate internet and phones anymore. Once people can communicate again, they will count and document the true scale of events. Right now, it seems the Iranian government would rather give up on internet and telephones altogether than having anyone find out, which tells you just about how bad the situation is.


> There is a reason why the Iranian government cannot activate internet and phones anymore. Once people can communicate again, they will count and document the true scale of events. Right now, it seems the Iranian government would rather give up on internet and telephones altogether than having anyone find out, which tells you just about how bad the situation is.

I talked to an Iranian person whose internet provider was misconfigured, so I was able to talk to them on a forum. They mentioned that phone calls still work in the daytime (they are cut at night), but SIM data, internet, and Starlink are all blocked.

If someone is from Iran or connected to it, feel free to correct me, but has there been any recent development where phone calls are completely shut off?


Phone calls are unencrypted, that's why


> Phone calls are unencrypted, that's why

Agreed. Did I mention that if you call Iranian people from a foreign number, you get a message from an AI instead? A lot of conspiracy theories formed around it, which were really scary, but the consensus is that the Iranian government records your voice, or something like that.

Absolutely scary stuff.


Let me comment as an SBCL user: This is outstanding work, and I can now remove a lot of performance hacks from my code because the default hash tables became equally fast!

Also, this technique eliminates a number of worst-case scenarios and inefficiencies, which is a boon for any hash table user.


I recently switched to uv, and I cannot praise it enough. With uv, the Python ecosystem finally feels mature and polished rather than like a collection of brittle hacks.

Kudos to the uv developers for creating such an amazing piece of software!


Yeah, I switched to writing Python professionally ~4 years ago, and I've been low-key hating the ecosystem ever since. Coming from a Java and JavaScript background, it was mostly npm/mvn install and it "just works". With Python, there's always someone being onboarded who can't get it to work. So many small issues: you have to have the correct version per project, then you have to get the venv running. And then installing it needs to build stuff because there's no wheel, so you need to set up a complete C++ and Rust toolchain etc., just to pull a small project and run it.

uv doesn't solve all this, but it's reduced the amount of ways things can go wrong by a lot. And it being fast means that the feedback-loop is much quicker.


I cannot share the same experience. mvn is a buggy mess, randomly forgetting dependencies and constantly needing a full clean to not die on itself. npm and the entire JS ecosystem feel so immature, with constant breaking changes and circular dependency hell when trying to upgrade stuff.


I've seen mvn projects that spin like a top and others that were a disaster.

I think it's little recognized that there is a scaling limit for snapshots. If you have 20 people developing 20 projects and they are co-located in the same room with the server, builds work 50-80% of the time and people think it's fine. If you're the one guy who is remote and has a slow connection, builds work 0% of the time. The problem is that at slightly different times you get slightly different snapshots that aren't compatible with each other -- it's a scaling problem, because if you add enough developers and enough projects it will eventually get you.
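To make the failure mode concrete: a -SNAPSHOT dependency in a pom.xml is mutable, so Maven re-resolves it against the repository and two builds minutes apart can pull different artifacts (the coordinates below are hypothetical):

```xml
<!-- pom.xml fragment: unlike a pinned release version, a SNAPSHOT
     version may resolve to a newer artifact on every build. -->
<dependency>
  <groupId>com.example</groupId>
  <artifactId>shared-lib</artifactId>
  <version>1.4-SNAPSHOT</version>
</dependency>
```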

I've worked at other places where the mvn clean was necessary every time; other developers thought this shouldn't be necessary and that I was a doofus, except I was able to make consistent progress like a ratchet on the project and get it done, and they weren't.

Where I am now mvn is just fine, whenever it screws up there's a rational explanation and we're doing it wrong.


That's an issue with the packages themselves though, not with package management as a whole. You and the comment above you are talking about different things. While there's plenty of pain to be had with npm, if you have a project that used to work years ago, you can generally just clone, install and be done, even if on older versions. On Python this used to mean a lot of hurt, often even if it was a fresh project that you just wanted to share with a colleague.


For values of "years" greater than 1?

Node/NPM was a poster child of an ecosystem where projects break three times a week, due to having too many transitive dependencies that are being updated too often.


This argument makes no sense. Your dependencies don't change unless you change them, npm doesn't magically update things underneath you. Things can break when you try to update one thing or another, yes, but if you just take an old project and try and run it, it will work.


Assuming the downloads still exist? Does NPM cache all versions it ever distributed?

That's always one major thing I saw breaking old builds: old binaries stop being hosted, forcing you to rebuild them from old source, which no longer builds under current toolchains - making you either downgrade the toolchain that itself may be tricky to set up, or upgrade the library, which starts a cascade of dependency upgrades.

It's not like Node projects are distributed with their deps vendored; there's too much stuff in node_modules.


> Does NPM cache all versions it ever distributed?

Yes it does, that's the whole point. You can still go and install the first version of express ever put on npm from 12 years ago. You can also install any of the 282 releases of it that have ever been put on npm since then. That's the whole point of a registry, it wouldn't be useful if things just disappeared at some random point in time.

The only packages that get removed are malware and such, and packages which the vendor themselves manually unpublish [0]. The latter has a bunch of rules to ensure packages that are actually used don't get removed, please see the link below.

[0] https://docs.npmjs.com/policies/unpublish


IIRC there is a package whose whole point is to include everything else in its package.json and make them ineligible for unpublish.


You're using a different, no-longer-hosted package three times a week? That's somewhere between very unusual and downright absurd.

Yes, you can find edge cases with problems. Using them as an argument for "breaks 3 times per week" does not hold.


No, I was using this as an argument for why I don't expect Node projects older than a year or two to be buildable without significant hassle.

(Also note that outside the web/mobile space, projects that weren't updated in a year are still young, not old. "Old" is more like 5+ years.)

The two things are related. If your typical project has a dependency DAG of 1000+ projects, a bug or CVE fix somewhere will typically cause a cascade of potentially breaking updates to play out over multiple days, before everything stabilizes. This creates pressure for everyone to always stay on the bleeding edge; with a version churn like this, there's only so many old (in the calendar sense) package dists that people are willing to cache.

This used to be a common experience some years back. Like many others, I gave up on the ecosystem because of the extreme fragility of it. If it's not like that anymore, I'd love to be corrected.


I don't know if it is still as fragile as you remember, but if you just never update your package-lock then it is super stable, as your (transitive) dependencies never change.

The non-trivial exception being if some dependency was downloading resources on the fly (maybe like a browser compat list) or calling system libraries (e.g. running shell commands).


> npm doesn't magically update things underneath you

It used to prior to npm 5 when lockfiles were introduced (yarn introduced lockfiles earlier).


Projects breaking so frequently on npm and Node is simply not the case unless you are trying to upgrade an old project one dependency per day…


I'm not saying mvn or npm is perfect. But the issues they have are consistent: my coworker and I would either have the same issues or no issues at all. But with Python there are probably more ways of running the project in the team than there are people, all with small tweaks to get it working on their system.


Python has been mostly working okay for me since I switched to Poetry. (“Mostly” because I think I’ve run into some weird issue once but I’ve tried to recall what it was and I just can’t.)

uv felt a bit immature at the time, but it sounds like it's way better now. I really want to try it out... but Poetry just works, so I don't really have an incentive to switch just yet. (Though I've switched from FlakeHeaven or something to Ruff and the difference was heaven and hell! Pun intended.)


A lot of Wagtail usage is with Poetry. Tends to be projects with 30-50 dependencies. It "just works" but we see a lot of people struggle with common tasks ("how do I upgrade this package"), and complain about how slow it is. I don’t have big insights outside of Wagtail users but I don’t think it’s too different.


n=1, but I've tried "manual" .venv, conda/miniconda, pipenv, Poetry, and am finally now at uv. uv is great. Poetry feels like it's focused on people who are publishing packages. uv is great for personal dev: spinning up/down lots of venvs, speedy, and uvx/uv scripts are very convenient compared to having all my sandbox projects in one Poetry env.


Ok, you convinced me to give it a try. Tbh, I am a casual user of python and I don't want to touch it unless I have a damn good reason to use it.


You do not need a damn good reason for this. Just try it out on a simple hello world. Then try it out on a project already using Poetry, for example:

    uv init
    uv sync

and you're done.

I'd say if you do not run into the pitfalls of a large Python codebase with hundreds of dependencies, you'll not get the bigger argument people are talking about.


I don't think you need to sync, do you? It always just does it when running.

That said, I do wish uv had `uv activate`. I like just working in the virtualenv without having to `uv run` everything.


I do usually include instructions in our READMEs to do a `uv sync` as install command, in order to separate error causes, and also to allow for bootstrapping the venv so that it's available for IDEs.


That makes sense, thanks.


You can still `source .venv/bin/activate(.fish)` and skip the uv run bit. I have Fish shell configured to automatically activate a .venv if it finds one in a directory I switch to.


I do do that, can you please share your fish script to autoload it? I have something for Poetry envs, but not venv dirs.


Sure thing - I mostly ended up using this for activating a .venv in a fabfile directory...

    function __auto_fab --on-variable PWD
        iterm2_print_user_vars
        if [ -d "fabfile" ]
            if [ -d "fabfile/.venv" ]
                if not set -q done_fab
                    and not set -q VIRTUAL_ENV
                    echo -n "Starting fabfile venv... "
                    pushd fabfile > /dev/null
                    source .venv/bin/activate.fish  --prompt="[fab]"
                    popd > /dev/null
                    set -g done_fab 1
                    echo -e "\r Fabfile venv activated         "
                end
            else
                echo "Run gofab to create the .venv"
            end
        end
    end

I've since deleted the one to do a .venv in this directory, but I think it was roughly this...

    function __auto_venv --on-variable PWD
        if [ -d ".venv" ]
            if not set -q done_venv
                echo -n "Starting venv... "
                source .venv/bin/activate.fish  --prompt="[venv]"
                set -g done_venv 1
                echo -e "\r Venv activated         "
            end
        end
    end

(just tested that and it seems to work - the --prompt actually gets overridden by the project name from uv's pyproject.toml now though so that's not really necessary, was useful at some point in the past)

These live in ~/.config/fish/conf.d/events.fish


Thank you!


I'm not them, but I use `direnv` for this. Their wiki includes two layout_uv[1] scripts, one that uses `uv` just to activate a regular venv and a second that uses it to manage the whole project. I use the latter.

[1] https://github.com/direnv/direnv/wiki/Python
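For anyone curious, once one of those `layout_uv` functions is installed (e.g. in `~/.config/direnv/direnvrc`), the per-project setup is just a one-line `.envrc` - direnv's convention maps `layout uv` to the `layout_uv` function and runs it whenever you enter the directory:

```shell
# .envrc at the project root; direnv evaluates this on cd.
# `layout uv` invokes the user-provided layout_uv function, which
# creates and activates the project's uv-managed virtualenv.
layout uv
```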


That's great, thanks! I use direnv but didn't know they had this.


Custom layouts are awesome. You can set up any script to run when direnv runs, so you can support just about anything you want even before direnv adds a builtin.


I keep going back and forth on ‘uv run’. I like being explicit with the tooling, but it feels like extra unneeded verbosity when you could just interact with the venv directly. Especially since I ported a bunch of scripts from ‘poetry run’


> I am a casual user of python and I don't want to touch it unless I have a damn good reason to use it.

I... what? Python is a beautiful way to evolve beyond the troglodyte world of sh for system scripts. You are seriously missing out by being so adamantly against it.


Just you wait till someone shows you how Rust is to Python what Python is to shell scripts. For one, null safety is a major issue in most corporate Python code, and much less of an issue in Rust code.


Rust is decidedly not a scripting language.

Don't get me wrong, Rust is great and I use it too, but for very different purposes than (system) scripts.


Now, if I hadn't read literally the same message for Pipenv/Pipfile and poetry before, too...

Python is going through package managers like JS goes through trends: classes everywhere, hooks, signals, etc.


There have been incremental evolutionary improvements that were brought forth by each of the packages you named. uv just goes a lot further than the previous one. There have been others that deserve an honorary mention, e.g. pip-tools, pdm, hatch, etc. It's going to be very hard for anything to top uv.


But how does it work with components that require libraries written in C?

And what if there are no binaries yet for my architecture, will it compile them, including all the dependencies written in C?


IMO if you require libraries in other languages then a pure python package manager like uv, pip, poetry, whatever, is simply the wrong tool for the job. There is _some_ support for this through wheels, and I'd expect uv to support them just as much as pip does, but they feel like a hack to me.

Instead there is pixi, which is similar in concept to uv but for the conda-forge packaging ecosystem. Nix and guix are also language-agnostic package managers that can do the job.


But for example, if I install the Python package "shapely", it will need a C package named GEOS as a shared library. How do I ensure that the version of GEOS on my system is the one shapely wants? By trial and error? And how does that work with environments, where I have different versions of packages in different places? It sounds a bit messy to me, compared to a solution where everything is managed by a single package manager.


You are describing two different problems. Do you want a shapely package that runs on your system, or do you want to compile shapely against the GEOS on your system? In case 1 it is up to the package maintainer to package and ship a version of GEOS that works with your OS, Python version, and library version. If you look at the shapely page on PyPI you'll see something like 40 packages for each version, covering the most popular permutations of OS, Python version, and architecture. If a pre-built package exists that works on your system, then uv will find and install it into your virtualenv and everything should just work. This does mean you get a copy of the compiled libraries in each venv.
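To sketch how that matching works: a wheel filename encodes the name, version, Python tag, ABI tag, and platform tag (per the wheel spec, PEP 427), and the installer compares those tags against the running interpreter. A simplified parser (the filename below is illustrative, and the optional build tag is ignored):

```python
# Parse "name-version-pythontag-abitag-platformtag.whl" into its parts.
# Real wheel names may also carry a build tag; this sketch skips it.
def parse_wheel_name(filename: str) -> dict:
    stem = filename.removesuffix(".whl")
    name, version, py_tag, abi_tag, plat_tag = stem.split("-")
    return {"name": name, "version": version,
            "python": py_tag, "abi": abi_tag, "platform": plat_tag}

# Hypothetical filename in the style of shapely's PyPI wheels:
info = parse_wheel_name("shapely-2.0.0-cp311-cp311-manylinux_2_17_x86_64.whl")
print(info)
```

An installer simply picks the wheel whose tags are compatible with the local interpreter; if none match, it falls back to building from source.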

If you want to build shapely against your own version of GEOS, then you fall outside of what uv does. What uv does in that case is download the build tool(s) specified by shapely (setuptools and cython in this case) and then hand over control to that tool to handle the actual compiling and building of the library. In that case it is up to the creator of the library to make sure the build is correctly defined, and up to you to make sure all the necessary compilers, headers, etc. are set up correctly.


In the first case, how does the package maintainer know which version of libc to use? It should use the one that my system uses (because I might also use other libraries that are provided by my system).


The libc version(s) to use when creating python packages is standardised and documented in a PEP, including how to name the resulting package to describe the libc version. Your local python version knows which libc version it was compiled against and reports that when trying to install a binary package. If no compatible version is found, it tries to build from source. If you are doing something 'weird' that breaks this, you can always use the --no-binary flag to force a local build from source.
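To sketch where that information lives at runtime, the standard library exposes both the detected libc and the platform identifier (stdlib calls only; the exact output depends on your system):

```python
import platform
import sysconfig

# The libc the running interpreter was linked against, e.g. ('glibc', '2.35')
# on most Linux systems; empty strings where the concept doesn't apply.
libc, version = platform.libc_ver()
print(libc, version)

# The platform identifier binary packages are matched against,
# e.g. 'linux-x86_64'.
print(sysconfig.get_platform())
```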


You could use a package manager that packages C, C++, Fortran and Python packages, such as Spack: here's the py-shapely recipe [1] and here is geos [2]. Probably nix does similar.

[1]: https://github.com/spack/spack/blob/develop/var/spack/repos/... [2]: https://github.com/spack/spack/blob/develop/var/spack/repos/...


That's what I mean, in this case pip, uv, etc. are the wrong tool to use. You could e.g. use pixi and install all python and non-python dependencies through that, the conda-forge package of shapely will pull in geos as a dependency. Pixi also interoperates with uv as a library to be able to combine PyPI and conda-forge packages using one tool.

But conda-forge packages (just like PyPI packages, or anything that does install-time dependency resolution really) are untestable by design, so if you care for reliably tested packages you can take a look at nix or guix and install everything through that. The tradeoff with those is that they usually have less libraries available, and often only in one version (since every version has to be tested with every possible version of its dependencies, including transitive ones and the interpreter).

All of these tools have a concept similar to environments, so you can get the right version of GEOS for each of your projects.


Indeed, I'd want something where I have more control over how the binaries are built. I had some segfaults with conda in the past, and couldn't find where the problem was until I rebuilt everything from scratch manually and the problems went away.

Nix/Guix sound interesting. But one of my systems is an NVIDIA Jetson, where I'm tied to the system's libc version (because of CUDA libraries etc.), so building things is a bit trickier.


With uv (and pip) you can pass the --no-binary flag and it will download the source code and build all your dependencies, rather than downloading prebuilt binaries.

It should also respect any CFLAGS and LDFLAGS you set, but I haven't actually tested that with uv.


I just tried --no-binary with the torchvision package (on a Jetson system). It failed. Then I downloaded the source and it compiled without problems.


This type of situation is why I use Docker for pretty much all of my projects—single package managers are frequently not enough to bootstrap an entire project, and it’s really nice to have a central record of how everything needed was actually installed. It’s so much easier to deal with getting things running on different machines, or things on a single machine that have conflicting dependencies.


Docker is good for deployment, but devcontainer is nice for development. Devcontainer uses Docker under the hood. Both are also critically important for security isolation unless one is explicitly using jails.


What exactly prevents you from creating your own packages if you want to use your system package manager?

On Alpine and Arch Linux? Exactly nothing.

On Debian/Ubuntu? maybe the convoluted packaging process, but that's on you for choosing those distributions.


On Nvidia/Jetson systems, Ubuntu is dictated by the vendor.


uv is not (yet) a build system and does not get involved in compiling code, but it easily lets you plug in any build system you want. So it will let you keep using whatever system you are currently using for building your C libraries. For example, I use scikit-build-core for building all of my libraries' C and C++ components with CMake, and it works fine with uv.


    uv build
    Building source distribution...
    running egg_info
    writing venv.egg-info/PKG-INFO
    Successfully built dist/venv-0.1.0.tar.gz
    Successfully built dist/venv-0.1.0-py3-none-any.whl


I guess it depends on what you mean by a build system. From my understanding uv build basically just bundles up all the source code it finds, and packages it into a .whl with the correct metadata. It cannot actually do any build steps like running commands to compile or transform code or data in any way. For that you need something like setuptools or scikit-build or similar. All of which integrate seamlessly with uv.


It actually does exactly what pip does depending on your configured build backend, so if you have your pyproject.toml/setup.py configured to build external modules, `uv build` will run that and build a binary wheel
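For concreteness, the hand-off point is the [build-system] table in pyproject.toml, which both pip and uv read to pick the backend (a minimal, hypothetical setuptools example):

```toml
# pyproject.toml fragment: the frontend (pip or uv) installs the listed
# requirements into an isolated environment and delegates the actual
# wheel building to the named backend.
[build-system]
requires = ["setuptools>=61"]
build-backend = "setuptools.build_meta"
```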


Yes, that's my point. You need to bring your own 'real' build system to make uv do anything non-trivial. And the fact that this works transparently with uv is a very good thing.


I see what you mean. You can use it with mise that has build support.


Yes, it'll build any dependency that has no binary wheels (or for which you explicitly pass --no-binary), as long as said package supports it (i.e. via a setup.py/pyproject.toml build-backend). Basically, just like pip would.



Unlike uv this tool is unlikely to solve problems for the average Python user and most likely will create new ones.


Agreed. However, for users who want to get faster speed out of Python, wouldn't that just work with RustPython? It can also run in the browser then.


RustPython is just an interpreter written in Rust. There's no reason why it would be meaningfully faster than CPython just because it's written in Rust rather than C. Rust adds memory safety, not necessarily speed.

A new and immature interpreter is going to have other problems:

- Lack of compatibility with CPython
- Not up to date with the latest version's features
- Incompatibility with CPython extensions

RustPython is a cool project, but it's not reached the big time yet.


I fully agree. Limiting the amount of copies of software to sell them like a finite good has so many downsides:

1. There may be people who cannot use/afford some software, although there is technically an infinite supply.

2. Collaboration becomes awkward. Either all contributors give up their rights (Open Source), or one contributor holds all the rights and the rest are treated unfairly. The latter decreases the incentive to make software modular and reusable.

3. The resulting software typically gets worse due to some copyright enforcement mechanisms. For example, no closed source software will ever have a good debugger, because that would allow viewing and changing the source code.

4. It creates a power imbalance between software owners and software users. Nearly all software has to be adapted over time, but the software owner has a monopoly on performing such adaptations. The result is enshittification, surveillance, and basically a return to feudalism where daily life is governed by a small number of overlords.

5. It is not clear how to price software fairly, and there is also little incentive to do so.

6. My impression is that high-quality software converges to formal proof, which is AFAIK not copyrightable.

For all these reasons, I think it is time to consider a world without copyright on software.

To those that worry about salaries in such a world: Negotiate payment in advance (contracts, crowdfunding, bounties, ...), or get a job where software is created as a byproduct (consultant, researcher, tester, ...).


Now Microsoft just sounds like pre-Brexit Britain. Why reflect on your own shortcomings when you can blame the EU instead :)

I suggest Microsoft follows Britain's example and leaves. The main difference is that we Europeans actually miss the Brits, whereas nobody would miss Microsoft and its shoddy products and business practices.

On a more serious note, I fully understand that the Digital Markets Act is causing Microsoft headaches. But I think this headache is well deserved. Big Tech has been building moats where they should have built bridges, and now our computing landscape resembles medieval Germany where everything was at the mercy of a few feudal lords. It is time to drive out those lords and reshape software in a way that empowers, not enslaves.


Yes, the #. reader macro is one of the ways you can achieve this in Common Lisp. Using the reader macro is also way more efficient, because you don't awkwardly use your compiler as an interpreter for a weird subset of your actual language - you simply call compiled code.
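A minimal sketch of what #. buys you (the names below are made up; any conforming CL should accept this):

```lisp
;; #. evaluates the following form at READ time, so the compiled code
;; only ever contains the resulting literal.
(defconstant +half-pi+ #.(/ pi 2))

;; A lookup table computed once when the source is read, not per call:
(defun small-isqrt (n)
  (svref #.(coerce (loop for k below 16 collect (isqrt k)) 'vector) n))
```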

Seeing Greenspun's tenth rule [1] in action again and again is one of the weird things we Common Lisp programmers have to endure. I wish we would have more discussions on how to improve Lisp even further instead of trying to 'fix' C or C++ for the umpteenth time.

[1] https://en.wikipedia.org/wiki/Greenspun%27s_tenth_rule


>I wish we would have more discussions on how to improve Lisp even further instead of trying to 'fix' C or C++ for the umpteenth time.

I agree one million percent; projects like SBCL are great, but my impression is that there are tons of improvements to be had in producing optimized code for modern processors (cache friendliness, SIMD, etc.), GPU programming, etc. I asked about efforts in those directions here and there, but did not get very clear answers.


I don't know much about Common Lisp, but one of the times I evaluated it I wondered why it fares so poorly in benchmarks[1], and as a complete noob I went and checked what sort of code it would produce for something completely trivial, like adding 2 fixnums. And oh my god:

  * (defun fx-add (x y)
      (declare (optimize (speed 3) (safety 0) (debug 0))
               (type fixnum x y))
      (+ x y))
  FX-ADD
  * (disassemble #'fx-add)
  ; disassembly for FX-ADD
  ; Size: 104 bytes. Origin: #x7005970068                       ; FX-ADD
  ; 68:       40FD4193         ASR NL0, R0, #1
  ; 6C:       00048B8B         ADD NL0, NL0, R1, ASR #1
  ; 70:       0A0000AB         ADDS R0, NL0, NL0
  ; 74:       E7010054         BVC L1
  ; 78:       BD2A00B9         STR WNULL, [THREAD, #40]         ; pseudo-atomic-bits
  ; 7C:       A97A47A9         LDP TMP, LR, [THREAD, #112]      ; mixed-tlab.{free-pointer, end-addr}
  ; 80:       2A410091         ADD R0, TMP, #16
  ; 84:       5F011EEB         CMP R0, LR
  ; 88:       C8010054         BHI L2
  ; 8C:       AA3A00F9         STR R0, [THREAD, #112]           ; mixed-tlab
  ; 90: L0:   2A3D0091         ADD R0, TMP, #15
  ; 94:       3E2280D2         MOVZ LR, #273
  ; 98:       3E0100A9         STP LR, NL0, [TMP]
  ; 9C:       BF3A03D5         DMB ISHST
  ; A0:       BF2A00B9         STR WZR, [THREAD, #40]           ; pseudo-atomic-bits
  ; A4:       BE2E40B9         LDR WLR, [THREAD, #44]           ; pseudo-atomic-bits
  ; A8:       5E0000B4         CBZ LR, L1
  ; AC:       200120D4         BRK #9                           ; Pending interrupt trap
  ; B0: L1:   FB031AAA         MOV CSP, CFP
  ; B4:       5A7B40A9         LDP CFP, LR, [CFP]
  ; B8:       BF0300F1         CMP NULL, #0
  ; BC:       C0035FD6         RET
  ; C0: L2:   090280D2         MOVZ TMP, #16
  ; C4:       2AFCFF58         LDR R0, #x7005970048             ; SB-VM::ALLOC-TRAMP
  ; C8:       40013FD6         BLR R0
  ; CC:       F1FFFF17         B L0
  NIL
Are you serious? This should be 1, max 2 instructions, with no branches and no memory use.

Furthermore, I've also decided to evaluate the debuggers available for Common Lisp. However, despite it being touted as a debugger-oriented language, I think the actual debuggers are pretty subpar compared to the debuggers available for C, C++, Java or .NET. No Common Lisp debugger supports watchpoints of any kind. If a given debugger supports breakpoints at all, they're often implemented by wrapping code in code that triggers a breakpoint, or by making the code run under an interpreter instead of natively. Setting breakpoints in arbitrary code won't work; it needs to be available as source code first.

SBCL with SLIME doesn't have a nice GUI where I could use the standard F[N] keys to step, continue, stop, etc. I don't see any pane with a live disassembly view. No live watch. The LispWorks GUI, on the other hand, looks like a space station, where I struggle to orient myself.

The only feature that is somewhat well done is live code reload, but IMO it's far less important than the well-implemented breakpoints and watchpoints of other languages, since the main thing I need a debugger for is to figure out what the hell a given piece of code is doing. Editing it is a completely secondary concern. And live code reload is also not unique to Common Lisp.

Debugger-wise, Java and .NET seem to be leading in quality, followed by C and C++.

[1]: Yes, I have read many comments about the alleged good performance of Common Lisp, but either authors of these comments live in a parallel reality with completely different benchmark results, or they’re comparing to Python. As such I treat those comments as urban legends.


Skill issue ;)

  * (defun fx-add (x y)
        (declare (optimize (speed 3) (safety 0) (debug 0))
                 (type fixnum x y))
        (the fixnum (+ x y)))
  FX-ADD
  * (disassemble 'fx-add)
  ; disassembly for FX-ADD
  ; Size: 6 bytes. Origin: #x552C81A6                           ; FX-ADD
  ; 6:       4801FA           ADD RDX, RDI
  ; 9:       C9               LEAVE
  ; A:       F8               CLC
  ; B:       C3               RET
  NIL


what does (the ....) do?


> what does (the ....) do?

Specifies the type of the form. In this example, it tells the CL compiler that the returned `(+ x y)` is a `fixnum`.


Oh thank you!!


You can also get the same result by declaring the function's type:

  * (declaim (ftype (function (fixnum fixnum) fixnum) fx-add))
  (FX-ADD)
  * (defun fx-add (x y)
      (declare (optimize (speed 3) (safety 0) (debug 0)))
      (+ x y))
  FX-ADD
  * (disassemble 'fx-add)
  ; disassembly for FX-ADD
  ; Size: 6 bytes. Origin: #x552C81A6                           ; FX-ADD
  ; 6:       4801FA           ADD RDX, RDI
  ; 9:       C9               LEAVE
  ; A:       F8               CLC
  ; B:       C3               RET
  NIL


I'm sorry, I tried similar things on my SBCL, and I cannot see the output you are seeing.

CL-USER> (defun fx-add (x y)

(declare (optimize (speed 3) (safety 0) (debug 0))

         (type fixnum x y))
(+ x y))

[OUT]: FX-ADD

CL-USER> (disassemble #'fx-add)

; disassembly for FX-ADD

; Size: 27 bytes. Origin: #x55498736 ; FX- ADD

; 36: 48D1FA SAR RDX, 1

; 39: 48D1FF SAR RDI, 1

; 3C: 4801FA ADD RDX, RDI

; 3F: 48D1E2 SHL RDX, 1

; 42: 710A JNO L0

; 44: 48D1DA RCR RDX, 1

; 47: FF1425E8070050 CALL [#x500007E8] ; #x54602570: ALLOC-SIGNED-BIGNUM-IN-RDX

; 4E: L0: C9 LEAVE

; 4F: F8 CLC

; 50: C3 RET

[OUT]: NIL

``` it looks inefficient but i think it can still be optimized better. SBCL is mediocre at best at optimizations but most people do say that. I have not heard anyone calling it as fast as or faster than C/C++. I think sbcl being mediocre has more to do with how the compiler and its optimizations are structured rather than some inherent inefficiency of common lisp.

I agree with you that Lisps have a terrible UX problem, but that is slowly changing. Alive for VS Code is a nice tool if you want to try it out. I personally use Neovim for my CL development; it fits me well enough.

The biggest reason why someone might choose CL for performance is the ability to simply hack on the compiler as it is running: custom optimization passes, custom memory allocation and structuring, very fine-grained GC and allocator control, and custom instructions that you can teach SBCL to use WITHOUT RELOADING SBCL.

Those are just the basics in my opinion. Performance has little to do with the language itself, except perhaps the start times. Beyond that it's a game of memory management (which, I'm pretty sure, you can hack on in CL too).

All this low-level power, while still being able to write so much abstraction that you can slap a whole language like APL on top of CL!


> SBCL is mediocre at best at optimizations but most people do say that.

I would think that it takes a bit more knowledge to judge that.

You'll need to understand a bit more how to declare types to achieve better results.


I meant mediocre compared to C/C++, not Java or C#. My bad.


> Are you serious? This should be 1, max 2 instructions, with no branches and no memory use.

Sure. But one can add two fixnums and get a bignum:

    CL-USER 31 > (fixnump (+ MOST-POSITIVE-FIXNUM MOST-POSITIVE-FIXNUM))
    NIL
As you see here, adding two fixnums can have a result which is not a fixnum.

Yes, Common Lisp does by default determine whether to return a bignum or fixnum.

The machine code you've shown takes care of that.

Let's see what the SBCL file compiler says:

    CL-USER> (compile-file "/tmp/test.lisp")
    ; compiling file "/tmp/test.lisp" (written 11 JUL 2024 10:02:04 PM):

    ; file: /tmp/test.lisp
    ; in: DEFUN FX-ADD
    ;     (DEFUN FX-ADD (X Y)
    ;       (DECLARE (OPTIMIZE (SPEED 3) (SAFETY 0) (DEBUG 0))
    ;                (TYPE FIXNUM X Y))
    ;       (+ X Y))
    ; --> SB-IMPL::%DEFUN SB-IMPL::%DEFUN SB-INT:NAMED-LAMBDA 
    ; ==>
    ;   #'(SB-INT:NAMED-LAMBDA FX-ADD
    ;         (X Y)
    ;       (DECLARE (SB-C::TOP-LEVEL-FORM))
    ;       (DECLARE (OPTIMIZE (SPEED 3) (SAFETY 0) (DEBUG 0))
    ;                (TYPE FIXNUM X Y))
    ;       (BLOCK FX-ADD (+ X Y)))
    ; 
    ; note: doing signed word to integer coercion (cost 20) to "<return value>"
    ; 
    ; compilation unit finished
    ;   printed 1 note


    ; wrote /tmp/test.fasl
    ; compilation finished in 0:00:00.031
    #P"/private/tmp/test.fasl"
    NIL
    NIL
Well, the SBCL compiler does tell us that it can't optimize that. Isn't that nice?!

If you want a fixnum to fixnum addition, then you need to tell the compiler that the result should be a fixnum.

   (the fixnum (+ x y))
After adding the above, the compiler no longer gives that efficiency note:

    CL-USER> (compile-file "/tmp/test.lisp")
    ; compiling file "/tmp/test.lisp" (written 11 JUL 2024 10:03:11 PM):

    ; wrote /tmp/test.fasl
    ; compilation finished in 0:00:00.037
    #P"/private/tmp/test.fasl"
    NIL
    NIL
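
An alternative sketch (my own example, not from the thread, so double-check the exact derived types on your SBCL): instead of asserting the result type with `the`, you can narrow the argument types so the compiler can derive a fixnum result on its own:

  * (defun small-add (x y)
      (declare (optimize (speed 3) (safety 0))
               ;; Narrow ranges let SBCL infer the result type.
               (type (integer 0 1000000) x y))
      (+ x y))

Since the sum of two `(integer 0 1000000)` values is at most 2000000, which is itself a fixnum, SBCL can prove the addition never overflows into a bignum and emit the plain ADD without any fallback path.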


Connection Machine Lisp never made it into production, but this paper had a profound impact on my scientific career. In particular, it was the following comment in the paper that triggered me to develop the Petalisp programming language (https://github.com/marcoheisig/Petalisp):

> Nevertheless, we have implemented (on a single-processor system, the Symbolics 3600) an experimental version of Connection Machine Lisp with lazy xappings and have found it tremendously complicated to implement but useful in practice.

I think there is a moral here: Don't hesitate to experiment with crazy ideas (lazy xappings), and don't be afraid to openly talk about those experiments.

After eight years of development, I can definitely confirm that lazy arrays/xappings are tremendously complicated to implement but useful in practice :)


For me, it was the papers that made me realise how we could have a world where programming could abstract over the underlying architecture, while still taking advantage of it being heterogeneous.

Something that is only recently starting to take shape in the compute landscape.

Petalisp looks cool.


Not sure what you consider "production", but I used *Lisp extensively on the CM2 at Xerox PARC in the late 1980s. In fact, I published several papers based on this research.


I'm assuming you mean 'lazy' in the context of the paper and not a more general system of lazy evaluation. What was it in particular that you found difficult about binding pure functions under the notion of a xapping? Thanks so much for posting your work.


Implementing lazy arrays or xappings naively is easy - the Petalisp reference backend has just 94 lines of code [1]. The challenge is to implement them efficiently. With eager evaluation, the programmer describes more or less precisely what the computer should do. With lazy evaluation, you only get a description of what should be done, with almost no restriction on how to do it. To make lazy evaluation of arrays as fast as eager evaluation, you have to automate many of the high-level reasoning steps of an expert programmer. Doing so is extremely tedious, but once you have it you can write really clean and simple programs and still get the performance of hand-crafted wizard codes.

[1] https://github.com/marcoheisig/Petalisp/blob/master/code/cor...
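
For the curious, a rough sketch of what the lazy API looks like (going from my reading of the Petalisp documentation; exact function names and output formatting may differ between versions):

  (ql:quickload :petalisp)
  (use-package :petalisp)
  ;; LAZY builds a description of an elementwise sum without evaluating it...
  (defparameter *sum* (lazy #'+ #(1 2 3) #(10 20 30)))
  ;; ...and only COMPUTE forces evaluation, after the whole data flow
  ;; graph is known and can be optimized.
  (compute *sum*)  ; => #(11 22 33)

The point is that by the time COMPUTE runs, the system sees the entire computation at once, which is where all the hard optimization work described above happens.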


Are you referencing strictness analysis?


This paper is from last year, and a lot of interesting things have happened in the meantime:

- The parallel garbage collector is now part of SBCL, but not yet the default. Enabling it is attractive, though, because it has roughly twice the throughput.

- The SBCL maintainers used the parallel GC as reason to make SBCL's memory management overall more modular. This process is still ongoing, but experimenting with different GC strategies on SBCL has never been easier.

- Another GC is already in the works that has virtually zero pause times. Once this is merged, we can finally bury the old myth that Lisp systems stutter (if it hasn't already been buried by the presence of video games written in Lisp, like Kandria).

The parallel GC is an amazing achievement by Hayley Patton. We SBCL users cannot thank her enough for her outstanding work!


> The parallel garbage collector is now part of SBCL, but not yet the default.

Why is it not yet the default? Higher latency, not yet tested enough, or something else?


It's not always faster, so making it the default could cause regressions; it seems pretty stable now but the default (gencgc) had a head-start of a few decades for bug finding (and making).


How can you enable it? It's not obvious from the compilation instructions.


Compile SBCL with the command ./make.sh --with-mark-region-gc and the resulting build will use the mark-region GC. (Picking the GC at start-time instead of build-time would be very nice, but technically hard to pull off unfortunately.)


Also don't forget to clear your FASLs since they're not compatible. e.g. remove ~/.cache/common-lisp/sbcl-2.4.4-linux-x64/ and ~/.slime/fasl/


Heh, thanks :)

Petalisp author here - this ELS paper is just a preview. I'm also preparing a 160 page document (for my PhD) that will explain everything in more detail. I'll post on HN when it is available.

There is also a recording of my ELS talk on Twitch: https://www.twitch.tv/videos/2138821711?t=00h40m35s


I was a bit surprised to see no mention of StarLisp [1]. Is this just a fundamentally different approach? I can imagine that targeting modern machines rather than the Connection Machine would be very different, but I thought the notations used in StarLisp were nice and could be reused.

[1] https://en.wikipedia.org/wiki/*Lisp


Great work. I've been following Petalisp for a couple of years now. It is one of my favourite CL projects

