I'm not sure why that 2022 number is so much higher than the previous years listed.
I guess the stimulus checks and stock market movements during the pandemic brought the median and average closer together for a year? Regardless, it didn't hold; it's back below 2007's level today by any measure.
I mean kind of? I feel like other than allies of necessity (to counter other great powers) there isn't really a point in pretending to be friendly to countries that are different to us in practically every way.
Is the cultural difference really that big? Bigger than, say, the difference between NYC and rural Kansas?
My generation of Western Europeans (born in the 80s) grew up admiring the US: listening to your music, watching your movies, wearing your brands. And we still do, mostly.
The unease seems to have started some time after 9/11, though. European countries joined various wars that turned out to be mostly a grab for control of oil states. (WMD, anyone?)
And the US basically just stopped leading the way on international cooperation. Instead of co-founding the International Criminal Court, the US threatened to invade The Hague because of it. Instead of leading the way on averting climate change, having the tech, the global power and the money to do so, the US chose to block much of the initiative coming from elsewhere. And there've been many similar things.
So yeah, to me at least the US feels kind of like an old friend that's been derailed. By 9/11, perhaps.
I'd love to be proven wrong. I'd love to come back to visit the US more often in the future. But with this administration, I just won't risk it. And also... I just don't want to, at the moment. :-/
Honestly, fair enough. And for the record, I do not want the US and Europe to stop cooperating; I just think that we have lost much of the intrinsic cultural reasoning for allying in the first place. But at the same time, your statement about the urban/rural culture gap kind of refutes my point anyway. Either way, we still have much bigger fish to fry with Russia and China on the horizon, so we definitely have more in common with each other government-wise compared to the rest of the world.
I'd say many people in the EU have similar views and ideas to about two-thirds of the US (maybe I'm being generous on the size): the portion of the US that doesn't think the world is flat, that accepts global warming might be happening, that thinks following the rule of law is a good thing, and that we don't need to destroy the US to fix it.
Europe is more like most Americans than you are to other Americans. Why pretend to be friendly with other states; why not break up? Cities should break apart from the rural areas of their states. We can all go back to tribal hunting groups.
Can someone explain a non-project based workflow/configuration for uv? I get creating a bespoke folder, repo, and uv venv for certain long-lived projects (like creating different apps?).
But most of my work, since I adopted conda 7ish years ago, involves using the same ML environment across any number of folders or even throw-away notebooks on the desktop, for instance. I’ll create the environment and sometimes add new packages, but rarely update it, unless I feel like a spring cleaning. And I like knowing that I have the same environment across all my machines, so I don’t have to think about if I’m running the same script or notebook on a different machine today.
The idea of a new environment for each of my related “projects” just doesn’t make sense to me. But, I’m open to learning a new workflow.
Addition: I don’t run others’ code, like pretrained models built with specific package requirements.
`uv` isn't great for that, I've been specifying and rebuilding my environments for each "project".
My one-off notebooks I'm going to set up to be similar to the scripts; that will require some mods.
It does take up a lot more space, but it is quite a bit faster.
However, you could use the workspace concept for this I believe, and have the dependencies for all the projects described in one root folder and then all sub-folders will use the environment.
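For what it's worth, a rough sketch of what that root folder's `pyproject.toml` might look like under the workspace approach (all names and paths here are placeholders; check uv's workspace docs for the details):

```toml
# Hypothetical root pyproject.toml for a shared-environment workspace.
[project]
name = "ml-sandbox"
version = "0.1.0"
requires-python = ">=3.11"
# The one shared set of dependencies, declared once at the root.
dependencies = ["numpy", "pandas", "scikit-learn"]

[tool.uv.workspace]
# Sub-folders matching this glob resolve against the root lockfile.
members = ["experiments/*"]
```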
But I mean, our use case is very different from yours; it's not necessary for you to use uv.
FYI, for anyone else who stumbles upon this: I decided to do a quick check on PyTorch (the most problem-prone dependency I've had), and noticed that they specifically recommend no longer using conda—and have since last November.
I personally have a "sandbox" directory that I put one-off and prototype projects in. My rule is that git repos never go in any dir there. I can (and do) go in almost any time and rm anything older than 12 months.
In your case, I guess one thing you could do is have one git repo containing your most commonly used dependencies and put your sub-projects as directories beneath that? Or even keep a branch for each sub-project?
One thing about `uv` is that dependency resolution is very fast, so updating your venv to switch between "projects" is probably no big deal.
> The idea of a new environment for each of my related “projects” just doesn’t make sense to me. But, I’m open to learning a new workflow.
First, let me try to make sense of it for you -
One of uv's big ideas is that it has a much better approach to caching downloaded packages, which lets it create those environments much more quickly. (I guess things like "written in Rust", parallelism etc. help, but as far as I can tell most of the work is stuff like hard-linking files, so it's still limited by system calls.) It also hard-links duplicates, so that you aren't wasting tons of space by having multiple environments with common dependencies.
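To make the space-saving part concrete: a hard link is just a second directory entry pointing at the same file data, which you can observe with nothing but the standard library. This only illustrates the filesystem mechanism, not uv's actual code:

```python
import os
import tempfile

# Illustration of the hard-linking idea (not uv itself): two directory
# entries sharing one inode, so the "copy" costs no extra disk space.
with tempfile.TemporaryDirectory() as d:
    cached = os.path.join(d, "cache_copy.py")
    with open(cached, "w") as f:
        f.write("x = 1\n")

    in_venv = os.path.join(d, "venv_copy.py")
    os.link(cached, in_venv)  # hard link, not a data copy

    same = os.path.samefile(cached, in_venv)  # same underlying inode
    nlink = os.stat(cached).st_nlink          # number of links to it

print(same, nlink)  # -> True 2
```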
A big part of the point of making separate environments is that you can track what each project is dependent on separately. In combination with Python ecosystem standards (like `pyproject.toml`, the inline script metadata described by https://peps.python.org/pep-0723/, the upcoming lock file standard in https://peps.python.org/pep-0751/, etc.) you become able to reproduce a minimal environment, automate that reproduction, and create an installable sharable package for the code (a "wheel", generally) which you can publish on PyPI - allowing others to install the code into an environment which is automatically updated to have the needed dependencies. Of course, none of this is new with `uv`, nor depends on it.
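As a concrete example of that inline script metadata, here's a minimal PEP 723 script. The dependency list is left empty only so this file runs anywhere; a real script would list packages like `requests` there, and `uv run script.py` would build a matching throwaway environment from the block:

```python
# /// script
# requires-python = ">=3.9"
# dependencies = []  # a real script would list e.g. ["requests"] here
# ///

# To plain Python the block above is just comments; tools like
# `uv run` read it to construct the right environment first.
import sys

ok = sys.version_info >= (3, 9)
print(ok)  # -> True on Python 3.9+
```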
The installer and venv management tool I'm developing (https://github.com/zahlman/paper) is intended to address use cases like yours more directly. It isn't a workflow tool, but it's intended to make it easier to set up new venvs, install packages into venvs (and say which venv to install it into) and then you can just activate the venv you want normally.
(I'm thinking of having it maintain a mapping of symbolic names for the venvs it creates, and a command to look them up - so you could do things like "source `paper env-path foo`/bin/activate", or maybe put a thin wrapper around that. But I want to try very hard to avoid creating the impression of implementing any kind of integrated development tool - it's an integrated user tool, for setting up applications and libraries.)
That's my main use case not-yet-supported by uv. It should not be too difficult to add a feature or wrapper to uv so that it works like pew/virtualenvwrapper.
E.g., calling that wrapper `uvv`, something like:
1. uvv new <venv-name> --python=... # venvs stored in a central location
2. uvv workon <venv-name> # now you are in the virtualenv
3. deactivate # now you get out of the virtualenv
You could imagine additional features such as keeping a log of the installed packages inside the venv so that you could revert to arbitrary state, etc. as goodies given how much faster uv is.
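A sketch of how that goodie could look: a tiny log of freeze snapshots, where reverting just means re-syncing against an older entry. Everything here is hypothetical (the function and storage format are made up); only the `uv pip freeze` / `uv pip sync` subcommands referenced in the comments are real:

```python
import time

# Hypothetical sketch: log environment states so any one can be restored.
log = []  # in practice this would be a file kept inside the venv

def snapshot(frozen_requirements):
    """Record one state, e.g. the line-split output of `uv pip freeze`."""
    log.append({"t": time.time(), "reqs": sorted(frozen_requirements)})

snapshot(["numpy==1.26.4"])
snapshot(["numpy==1.26.4", "pandas==2.2.0"])

# Reverting to the first state would mean feeding log[0]["reqs"]
# back into something like `uv pip sync`.
print(log[0]["reqs"])  # -> ['numpy==1.26.4']
```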
So this is probably just me not understanding your use case, but surely this is a nearly identical workflow?
1. uv init <folder-name> # venv stored in folder-name/.venv
2. cd <folder-name> # running stuff with uv run will automatically pick up the venv
3. cd .. # now you get out of the virtualenv
The UX improvement would be to have centralized management of the venvs (centralized location, ability to list/rm/etc. by name instead of by path).
I've worked like you described for years and it mostly works, although I've recently started to experiment with a new uv-based workflow that looks like this:
To open a notebook I run (via an alias)
uv tool run jupyter lab
and then in the first cell of each notebook I have
!uv pip install my-dependencies
This takes care of all the venv management stuff and makes sure that I always have the dependencies I need for each notebook. Only been doing this for a few weeks, but so far so good.
Why not just copy your last env into the next dir? If you need to change any of the package versions, or add something specific, you can do that without risking any breakages in your last project(s). From what I understand uv has a global package cache so the disk usage shouldn't be crazy.
Yeah, this is how I feel too. A lot of the movement in Python packaging seems to be more in managing projects than managing packages or even environments. I tend to not want to think about a "project" until very late in the game, after I've already written a bunch of code. I don't want "make a project" to be something I'm required or even encouraged to do at the outset.
I have the opposite feeling, and that's why I like uv. I don't want to deal with "environments". When I run a Python project I want its PYTHONPATH to have whatever libraries its config file says it should have, and I don't want to have to worry about how they get there.
I set up a "sandbox" project as an early step of setting up a new PC.
Sadly for certain types of projects like GIS, ML, scientific computing, the dependencies tend to be mutually incompatible and I've learned the hard way to set up new projects for each separate task when using those packages. `uv init; uv add <dependencies>` is a small amount of work to avoid the headaches of Torch etc.
I also like to keep my anti-tiger rock on me at all times. I don't really care that there's no evidence that it works. All I know is what I see, and I haven't seen any tigers.
Don't be coy, please enlighten us as to what this conspiracy is involving Russia that you think the MSM peddled, and what evidence you have that disproves the narrative.
The Steele dossier, which purported to contain evidence of the Trump campaign's links with Russia, turned out to actually be a Russian plant. That's what I'm talking about. People still peddle its contents as if they're anything other than fake news. That's a major problem. Same with Trump's 'very fine people' comment. You can accuse Rogan of spreading misinformation until the cows come home, but the mainstream media has also peddled its own share.
How many have you been doxxed for, or impeached for, or censored from spreading? As far as I'm aware, all your conspiracy theories have been promulgated by everyone and allowed to spread everywhere. I think that's the major difference. You should create your list. Twitter/X is a great way to spread such information to the public at large! No one will censor you. You are free :)
I haven't heard any mention of the dossier in years, other than as an artifact of the past. A quick search, and I can't find sources trying to claim its truth (or evidence of smoke, for which there might be a fire) in years.
I certainly didn't mention Rogan—I'm aware of his existence, but I've actually never heard him speak nor seen any transcripts of anything he's said. But trying to minimize the flood of absolute obvious shit that comes from right-wing outlets by choosing to point to Rogan specifically is a bit telling.
Anyone and everyone should be called out for lies they manufacture or spread. This includes lies on the left, lest you think I'm granting one side a pass.
Which claim? Almost everyone agrees that people across the political spectrum believed the election had fraud in 2020.
As for evidence of actual fraud: I'll just wave my hands towards the fact that Democratic turnout in 2020 is way out of line with national trends, whereas 2024 is exactly in line with past trends.
Now you're making two claims, neither of which you're providing evidence for.
If "everyone agrees...the election had fraud", I'm sure you can provide multiple reputable polls showing this sentiment of "everyone". (I'll be generous and lower the bar to just a majority of Americans, but I'm not going to accept polls that show -only- a majority of Republicans, since your claim is "across the political spectrum")
Second: even if, as you've admitted elsewhere, every single court case was lost (or denied due to lack of standing) in 2020, you do realize that evidence can be presented outside courts, right? Where is this evidence? They've had 4 years to collect it. If it's widespread and, as you said elsewhere, a statistical anomaly, then it, almost by definition, should be obvious to spot. Hand-waving to vibes and feels and "sure seems obvious to me" doesn't mean jack.
Now, I'll grant you, that vibes and feels certainly mean a lot to the animal natures of all of us. But feels and vibes are not proof of anything.
So for me, I saw the evidence with my own eyes when thousands of ballots came in for Joe Biden, and my immediate thought was... oh so those must certainly be fake. That's evidence enough for me. We all have our own bar.
> If "everyone agrees...the election had fraud", I'm sure you can provide multiple reputable polls showing this sentiment of "everyone". (I'll be generous and lower the bar to just a majority of Americans, but I'm not going to accept polls that show -only- a majority of Republicans, since your claim is "across the political spectrum")
I will restate. A significant (not a majority) number of democrats believed the election was fraudulent. Enough believed it was fraudulent, that this should concern anyone bothered about democracy and legitimacy. Here are the polls:
Look, you probably think you 'won' because I misspoke about a majority of Democrats, but these numbers are... not great. 45% of the country, 11% of Democrats? The results have stayed stable across time. Guess what: they get to vote too. You have to convince them. One easy way to do that is to share their incredulity that 1000s of ballots come in 100% for Biden in the middle of the night.
You know, you can approach this like a scientific hypothesis testing, or you can approach it the way everyday voters do. I think this is a choice that democrats need to make. By and large, the 'social sciences' are not very good at understanding human behavior because they don't understand what drives people. They're the 'men without chests' that CS Lewis talks about.
> So for me, I saw the evidence with my own eyes when thousands of ballots came in for Joe Biden, and my immediate thought was... oh so those must certainly be fake. That's evidence enough for me. We all have our own bar.
By that standard of evidence, I know a magical spell that's able to turn someone into a shapeshifter.
This should go some way to explaining why I don't treat my immediate thoughts on a small surprise to be sufficient.
> Look, you probably think you 'won' because I misspoke about a majority of Democrats, but these numbers are... not great. 45% of the country, 11% of Democrats? The results have stayed stable across time. Guess what: they get to vote too. You have to convince them. One easy way to do that is to share their incredulity that 1000s of ballots come in 100% for Biden in the middle of the night.
> You know, you can approach this like a scientific hypothesis testing, or you can approach it the way everyday voters do.
That's the problem: there is no proof that will convince them. And I don't mean that derogatorily. In general, one can't be convinced by facts when one arrived at a conclusion by feelings. Especially when so much astroturfing was done along the lines of "I'm not saying there's fraud, but a lot of people are saying it." If you have people in high places who know better (because we have them on record saying they know 2020 was legit) spreading fear for the sake of it, all it does is create a false narrative of some overwhelming consensus, which then just feeds on itself. "See, look at all of these tweets, posts, articles, mentioning other people mentioning that they have a bad feeling about 2020."
There was no evidence that anything was untoward in the 2020 election—and people have had 4 years at this point to present evidence that there was. The problem with your example is that people looked at something that has occurred in almost every election in the past 100 years: precinct-by-precinct votes often come in over time and in groups (you cannot be surprised that a precinct, as in a specific small area, is more homogenous than not). You can find pictures of chalkboards and primitive displays of election-night results coming in even from the early 20th century (probably before this too). But people took these static, after-the-fact, incremental updates to the running total as some sort of horse race, like the NYTimes was tracking a basketball game. It's not; to pick a computer metaphor, it's like showing a progress bar for a count of all of the items in a database. The computer literally cannot count all of the "red" or "blue" items in one atomic operation; it has to add them up incrementally.
This has been proposed elsewhere, but in addition to actually speeding up the counting in certain states (and removing the barriers to doing counting ahead of time (looking at you, PA)), there is the idea of making it illegal to post results until some representative threshold of the results are in. As in, once you can be statistically certain that a different outcome isn't possible (maybe 99.99% or something), then you can post your first update of the results. Obviously, no one would accept that we should wait until all of the results are in, because overseas/military/absentee ballots might take days to arrive (even if they were mailed the day before election day). And one obvious solution is maybe to require, at a national level, that all ballots must be received by COB election day, so that they can be tabulated in a timely manner.
Again, I fully understand that most people operate off of vibes and feels. Even highly educated people, outside of their domain (and maybe even inside their domain, if they have a vested interest in not being proved wrong), will default to vibes and feels. But you literally CANNOT prove a negative. So, just like it's impossible to prove no aliens have visited us, no one can prove that the 2020 election wasn't "stolen". Which is why we require evidence to rebut claims (extraordinary or not). No one has provided proof that aliens have visited us. And no one has provided proof that the 2020 election was "stolen"—as in the outcome would've been different. Yup, there's always some shitheads, ne'er-do-wells, and honestly-mistaken-about-their-eligibility-to-vote people who are caught each year. (Sadly, especially for the GOP, the number of GOP-voting voters who fall into that category outnumbers the Dem-voting voters.) But not once, at least that I've been able to find record of, in the past 40+ years has identified fraud been able to change the result of any election from state representative on up.
I'm sure they came up with something; it's quite literally the teacher's job, and I'm hesitant to believe any movie is perfect. What I want to see is whether Steven tried to rebut the feedback at all. I'm imagining the Ron Swanson scene at Home Depot where he says, "I know more than you."
> it's quite literally the teacher's job, and I'm hesitant to believe any movie is perfect.
If I were the teacher, I'd simply say, "Congratulations, great job," and leave it at that. Otherwise, the teacher would risk being ridiculed for nitpicking one of the greatest films ever.
See the "CPI-Adjusted to 2022" column https://dqydj.com/net-worth-by-year/