timothycrosley's comments | Hacker News

FWIW, I wrote isort, but am seriously considering migrating my projects to use Ruff. Long term, I think its design is simply better than that of the variety of tools we use within the Python ecosystem today. The fact that we have a plethora of projects meant to run on every commit, each one reparsing the AST independently, and often using a different approach to do so, just feels untenable long term to me.


That is about as large of an endorsement as I can conceive. Will definitely have to check it out!


BTW, thank you for isort!


> isort's completely random… For example the latest version I tried decided to alphabetically sort all the imports, regardless if they are part of standard library or 3rd party. This is a big change of behaviour from what it was doing before.

This is not isort! isort has never done that. It also has a formatting guarantee across major versions, which is actively tested, on every single commit to the repository, against real-world projects that use it: https://pycqa.github.io/isort/docs/major_releases/release_po...


It did this to me today…


Are you using any custom settings?


No. Seems they changed the default ordering.


Hi! I said this with more certainty than I should have. Software can always have bugs! For reference, I wrote isort, and my response came from the perspective of someone who has worked very hard to ensure it doesn't have any behavior that is random or non-deterministic.

From your description, it sounds like someone may have turned on force-alphabetical-sort (if this is happening within a single project). See: https://pycqa.github.io/isort/docs/configuration/options.htm.... You can run `isort . --show-config` to introspect the config options isort finds, and where it finds them, from within a project directory.

The other thing I can think of: coming from isort 4 -> 5, I wouldn't expect it to fully ignore import groupings, but maybe it no longer finds something it used to pick up automagically from the environment for determining a first_party import. If that's the case, this guide may be helpful: https://pycqa.github.io/isort/docs/upgrade_guides/5.0.0.html. If none of this helps, I'd be happy to help you diagnose what you're seeing.
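
As a rough sketch of the difference (assuming isort 5's Python API, where `isort.code()` accepts config options as keyword arguments):

  import isort

  snippet = "import requests\nimport os\nimport sys\n"

  # Default behaviour: stdlib (os, sys) and third-party (requests) imports
  # land in separate sections, each sorted alphabetically.
  print(isort.code(snippet))

  # force_alphabetical_sort drops the section grouping and sorts everything
  # into a single alphabetical block, matching the behaviour you describe.
  print(isort.code(snippet, force_alphabetical_sort=True))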


I upgraded the underlying docker image… so python version and all dependencies got bumped. I did not change any configuration or script.

I now use version 5.6.4, from 4.3.4. In the end we passed a flag to keep the old behaviour, but in my mind behaviours shouldn't just change.


While we are sharing meaningless anecdotes: almost everyone I know (since I live in Seattle) has been vaccinated. No one has reported any issues, and a couple have since successfully had a child. None have even gotten a breakthrough infection of covid. Among the friends I know from back east who haven't gotten vaccinated: one had just turned 30 and his only health issue was obesity. He died. One was 35, in the best shape of any of my friends, and spent 7 days in the ICU with double pneumonia. And one miscarriage.

But hey, the good thing is, we also have data. And the data shows no correlation between vaccination and these problems you're referring to, and yet a VERY HIGH correlation (and some good reasons to say causation) between getting covid and having these issues.


It's not as settled as you seem to think.

Leading vaccine developer Nikolai Petrovsky (who's working on a traditional, protein-based vaccine) recently mentioned in an interview that if he had a pregnant wife, he'd advise her to avoid both the virus and the vaccine (something only the privileged could attempt, so not a one-size-fits-all recommendation). [1]

(In a more technical interview aimed at a scientific audience, he outlines a number of issues he has with the current options. [2])

One of Petrovsky's key issues is that for pregnancy and children, the sensitivity is so high and the risks so great that there is usually a much, much higher bar before vaccines are authorised for use: that's been the history of traditional, protein-based vaccines, where it can take decades before they're authorised for use in pregnant women, babies, and children.

Pfizer only began their pregnancy and safety trials in February this year - so only a little over 7 months ago. It is designed to observe pregnancy through to newborns reaching 6 months of age, and will complete in a year.

So we currently have no safety data in pregnancies from pre-conception via all-important and sensitive first trimester, through to full term + 6 months.

Keep in mind the WHO changed position on safety and aligned with the CDC on recommending the vaccine 3 weeks before Pfizer even started its safety trials.

None of this is to say that getting Covid isn't currently provably worse than getting a current vaccine.

It's just to say the safety data is incomplete, there are still unknowns which could change the calculation significantly considering the nature of the technology used, and we just won't fully understand the issues for some time to come.

(Also keep in mind that with mandates, the proposal is for everyone to receive the current options, but the alternative is not for all pregnant women to become infected. Risk calculations here generally make that assumption, wrongly.)

[1] https://www.doctorlewis.com.au/podcast-1/2021/7/19/episode-2...

[2] https://www.youtube.com/watch?v=yL_2Rq1zoRg&t=3063s


I personally think avoiding infection forever is so impractical that it isn't worth laying out as an option. More and more evidence is showing that both previously infected and vaccinated people are likely to spread the infection (even if they don't get symptoms) as soon as 3 months after the acquired "immunity". Combined with how infectious delta is, this third choice of never getting infected just feels disingenuous.


What's the hospitalization rate for 20s - 40s for getting the vaccine? What's the hospitalization rate for covid?


Ontario data on vaccine side effects, particularly peri/myocarditis, by age:

https://www.publichealthontario.ca/-/media/documents/ncov/ep...

Note that only 55% were hospitalized, average stay 2 days.


Do we even know the real number? VAERS is highly underreported, and how many vaccine cases are just getting labeled as "covid cases"... I love the South Park skit about this, "covid related"... worth a watch.


This is complete nonsense. If anything, VAERS would be over-reported: the people who are most against vaccination blast it so often that I see it in my Facebook feed more often than advertisements. Anyone can add to VAERS, and I'm sure that with all the antivaxx advertisement of it, they DO. Not to mention that doctors are required to report, and they report even unrelated deaths that happen right after vaccination. Meanwhile, the reporting standards for covid are the same as for flu; nothing's changed there, just a bunch of people grasping at straws, toward what goal I can't imagine.


I use this for the isort in-browser demo: https://pycqa.github.io/isort/docs/quick_start/0.-try/. It was really awesome to be able to use a Python package directly, without any of the wrangling or modification usually required with other Python-in-the-browser solutions.
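
For anyone curious, the gist of it looks roughly like this (a hedged sketch assuming a Pyodide console, where `micropip` is the in-browser package installer and top-level await is allowed; isort installs cleanly because it ships a pure-Python wheel):

  import micropip
  await micropip.install("isort")  # fetched and installed entirely client-side

  import isort
  isort.code("import sys\nimport requests\n")  # runs in the browser, no server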


Articles like this: https://www.news-medical.net/news/20201026/COVID-19-now-like... make me think that's seriously unlikely. I mean, unless you are talking about one of the historical flu pandemics. Yes, this is less deadly than the 1918 flu was in 1918. Then again, I think the 1918 flu would also be less deadly than it was in 1918 if it occurred for the first time today, because of other medical advancements.


https://www.cdc.gov/nchs/nvss/vsrr/covid_weekly/index.htm

Fewer than 80 people under the age of 15 have died from COVID. Child deaths from COVID are essentially zero.

In the same timeframe about 110 died from influenza. Immunizing children for influenza also seems unnecessary.


The problem with examples like this is that simple file I/O operations are optimized in just about every language; it would likely run at a similar speed in very naive, unoptimized Python.

Here is an example:

Given `create_csv.py`:

  with open("output.csv", "w") as csv_file:
    for i in range(1_000_001):
      csv_file.write(",".join([str(i)] * 100))  # creating 100 columns per entry out of a million
      csv_file.write("\n")
and `search_csv.py`:

  import sys

  # Worst-case search: the matching value only appears in the final row.
  for row in open("output.csv"):
    if "1000000" in row.split(","):
      print(row)
      break
  else:
    sys.exit("row 1 million not found!")
The first script creates a one-million-line CSV with each row containing 100 columns, and the second performs a worst-case search (it has to make it all the way to the last row and check every column). The performance is better than what you mentioned, on commodity hardware, with very unoptimized Python code and a default Python 3 installation:

time python3 create_csv.py

  real    0m3.267s
  user    0m2.589s
  sys     0m0.608s
time python3 search_csv.py

1000000,1000000,1000000,1000000,1000000,1000000,1000000,1000000,1000000,1000000,1000000,1000000,1000000,1000000,1000000,1000000,1000000,1000000,1000000,1000000,1000000,1000000,1000000,1000000,1000000,1000000,1000000,1000000,1000000,1000000,1000000,1000000,1000000,1000000,1000000,1000000,1000000,1000000,1000000,1000000,1000000,1000000,1000000,1000000,1000000,1000000,1000000,1000000,1000000,1000000,1000000,1000000,1000000,1000000,1000000,1000000,1000000,1000000,1000000,1000000,1000000,1000000,1000000,1000000,1000000,1000000,1000000,1000000,1000000,1000000,1000000,1000000,1000000,1000000,1000000,1000000,1000000,1000000,1000000,1000000,1000000,1000000,1000000,1000000,1000000,1000000,1000000,1000000,1000000,1000000,1000000,1000000,1000000,1000000,1000000,1000000,1000000,1000000,1000000,1000000

  real    0m6.132s
  user    0m5.954s
  sys     0m0.162s


Can confirm that I would be happy to collaborate on this project; it looks really cool!


That would be amazing. It's easy to turn CrossHair into an absurdly slow fuzz tester (imagine hashing or printing your inputs early in the process). I think the ideal product would be good at both symbolic and concrete tactics, and the minimization logic of hypothesis would be really nice to have too. I will be in touch!


A portfolio of a slow-but-precise fuzzer plus a fast-but-imprecise fuzzer is the easiest integration: start both together and return whichever fails first. SMT solvers are often complementary to samplers.

However, it would be very interesting to see if a closer integration of symbolic and sampling methodologies is possible.
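
As a rough sketch of that portfolio idea (the two checker callables here are hypothetical stand-ins, not real CrossHair or Hypothesis APIs; in practice you would likely want separate processes rather than threads):

  import concurrent.futures

  def portfolio_check(symbolic_check, random_check, target, timeout=60):
      # Run a slow-but-precise checker and a fast-but-imprecise checker
      # together, returning whichever counterexample arrives first.  Both
      # callables take the target function and return a counterexample or None.
      with concurrent.futures.ThreadPoolExecutor(max_workers=2) as pool:
          futures = {
              pool.submit(symbolic_check, target): "symbolic",
              pool.submit(random_check, target): "random",
          }
          # Raises concurrent.futures.TimeoutError if neither finishes in time.
          for done in concurrent.futures.as_completed(futures, timeout=timeout):
              counterexample = done.result()
              if counterexample is not None:
                  # The other checker is not cancelled in this sketch.
                  return futures[done], counterexample
      return None, None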


Reminder that we officially live in a post-Python 2.x world. I think at this point the correct answer to the awkwardness of threads is to use async.
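
For illustration, a minimal asyncio sketch (standard library only, not tied to any particular project) of the kind of concurrent I/O that would otherwise mean juggling threads and locks:

  import asyncio

  async def fetch(name: str, delay: float) -> str:
      # Stand-in for real I/O; await yields control instead of blocking a thread.
      await asyncio.sleep(delay)
      return f"{name} done"

  async def main() -> None:
      # Both "requests" run concurrently, with no threads, locks, or pools.
      results = await asyncio.gather(fetch("a", 0.2), fetch("b", 0.1))
      print(results)  # ['a done', 'b done']

  asyncio.run(main())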


I'm a principal-level software engineer with experience providing strong technical direction for development teams. I have extensive experience designing and developing complex web applications and large-scale data processing pipelines, working with teams to create and maintain both low- and high-level documentation, and working with customers to define requirements. I have a knack for simplifying and organizing the complex, enabling teams to scale, and I'm a core developer behind many successful open source projects. I'm always excited to learn more and to tackle new problems.

  Location: Seattle, WA, USA
  Remote: Yes
  Willing to relocate: No
  Technologies/Languages: Python, JavaScript, C/C++, Ruby, YAML, TOML, HTML, CSS, Sass, LESS.
  Technologies/Frameworks: Spark, Hive, Django, Compass, Zope, Qt, PySide, GTK, TK, MEAN, Angular, hug, flask.
  Technologies/Databases and Caches: Hadoop, Oracle, PostgreSQL, MySQL, MongoDB, Redis, Memcache, ElasticSearch, Solr, Google’s Cloud Datastore.
  Resume/CV: github.com/timothycrosley, timothycrosley.com
  Email: timothy.crosley@gmail.com


It's funny you say this. I've always had a suspicion that every budding programmer I encounter studies forward-looking things, and that if a young person did study something ancient like COBOL, they'd probably have great job security and stand out to the few companies that need that kind of developer.


Speaking of COBOL - the next "big thing" of that nature is likely going to be Visual Basic 6.

It's an albatross around Microsoft's neck, but every time they update Windows, they keep around the runtime DLLs - because so many businesses have software written internally and otherwise that can't migrate to something else.

If you know VB6 - and you are confident in migration to another platform, or willing to maintain old code (maybe while migrating) - you'll likely have work long into the future.

The most likely migration path would be from VB6 to VB.NET or to C# - staying on the Windows platform. Another option would be migration to GAMBAS or Mono (aka .NET for *nix).

Those feeling adventurous might try Python with Qt, or some other GUI framework; at some point, it might be better just to examine and understand the core logic and flow - then convert it all over to a web-accessible system (of whatever choice you want).

I imagine that in time we'll see some kind of VB6 to WASM compiler or something, if someone hasn't already taken a stab at it. What we won't see, though, is Microsoft open-sourcing VB6 or anything like that. They've said they want to, but due to the various licenses used in the development of the language (and components) - it's virtually impossible for them to do it.

I coded in VB (3-6) for well over a decade, but it's been forever since I last touched it. That said, I'll always have a soft spot for BASIC (having grown up on a version of Microsoft BASIC on the TRS-80 Color Computer line) - so I could probably pick up where I left off once I rebooted the VS compiler/IDE, without too much trouble.

I expect that might be where my career turns to as I get older (currently 46 and working in SPA Javascript/NodeJS apps).


Maybe.

But, you'd have to be a certain kind of person. And, if you are, that is totally cool. Most programmers I encounter thirst for the New Thing.

