Hacker News | tmvphil's comments

Zohran isn't proposing putting any new units under rent control (really rent stabilization), only temporarily freezing rents on existing stabilized units. This will make it harder for the city to attract new buildings into rent stabilization in the future, but it will benefit existing tenants. It won't have any effect on the ability to profitably develop market-rate units at all.


Property developer here. I have zero faith that NYC would not put rent control on new units in the future. I will invest nothing in NYC and will tell every other developer I know to avoid it like the plague.


If NYC actually makes it easy to build, there's practically infinite investment available. Sure dude, nobody will build >$1M condos because you told them not to.


Mamdani is not advocating for it. His goals include attempts to make development cheaper.

You sound like a doomsayer who'd stop investing in the USA because the current admin has made it an unsafe environment.


I'm optimistic that he will actually be a positive force in reforming how the city operates. I think he is pragmatic in that he understands that efficiency in government administration is something that progressives have insufficiently prioritized. His policies are more populist than I'd prefer, but I think not the crazy socialist fever dream that Rs portray it as. The scariest thing for me is the prospect of active sabotage from the federal level, although I don't know how much they have held back.


The gov't may try to fuck with NYC using ICE or whatever, but honestly I think the fears about federal funding are overblown.

NYC generates $2+ trillion in GDP all on its own. It is the largest metropolitan economy in the world, let alone the United States. I don't know how much NYC actually depends on federal money, but if there's any city that has a chance to figure out how to make it through a government funding squeeze, it's NYC.

Honestly I think the only recourse the feds have to put pressure on NYC is the actual gestapo shit they've already been pulling in Chicago.


NYC will riot French-style if ICE moves in en masse.


That won’t be enough to stop ICE.


> think not the crazy socialist fever dream that Rs portray

That's because he's a democratic socialist, not a communist like they want people to think. If people really looked into the policies of the DSA they would support it. There is a reason Einstein, Keller, and more were adamant supporters.


[flagged]


"Comrade" didn't begin with the USSR.


So what? It is used almost exclusively by communists, in Hollywood and in real life.


In a lighter vein, let me suggest reading the Psmith series by Wodehouse. If not the entire series then Leave It To Psmith, at least.


You may be shocked by this, but comrade has been in use since the French Revolution. In fact, it doesn't mean "friend" like most think - it quite literally means "fellow party-member", someone who is a member of the same party. You, yourself, are comrades with your fellow party members, even if they are not communist. Even if you go to the root of the word, it's Spanish in origin. It's an egalitarian/gender-neutral word similar to "colleague" or "coworker", but effectively it _just_ means "ally" in modern parlance.

Even if you require the link to communism, 'comrade' in the popular sense _is_ used by _socialists_ to describe one another, not just by communists; communists are just a subset ideology of socialism. Like anarchists, progressives, and others under the umbrella of "the left", communists are just another branch on the tree of ideologies, and as a branch they used their mother's language of comradery to describe their fellow party-members and allies.

You can always admit when you are prejudiced by assumptions, you know, so I hope you take an interest in reading this article: https://en.wikipedia.org/wiki/Comrade

Edit:

> The distinctions between socialism and communism are rather academic and irrelevant in the long run

That's quite literally the biggest difference between socialism and communism: the long run. Communists want a communist society as the end-goal of socialism; socialists do not have that hope. In fact, most are not focused on the end goal at all, since we can't ever ascertain what it would look like - so they focus on the values of socialist ideas right now: what we can do now to ensure equality, freedom, and personal rights by protecting all living beings in health and sickness, success and failure. A society of equals first.


I appreciate that you put a lot of thought into your response, but I think you missed the plot. I know damn well what "comrade" means. It's one of those words with a stereotype attached to it. There are lots of words that change meaning in that way. I could call someone "my dear _" and people will assume that I'm talking to a romantic interest, because it's so weird to use the expression now in normal conversation. Likewise, if you shout out "I'm so gay!" the first thought people will have is that you are a homosexual, rather than that you are in a good mood.

>That's quite literally the biggest difference between socialism and communism, the long run.

Without getting into a huge discussion on this, books have been written to try to draw a line between these two things. Ultimately they refer to the same thing, a deviation from a free market and society. To support people who have less, they must steal from those who have more. Socialism or socialist policies (such as the type we have in the United States, not the kind that most original socialists were writing about) are like a concerning lump that might turn out to be nothing more than a nuisance. Communism is Stage 4 cancer.

>A society of equals first.

This is easiest to achieve when people have a certain amount in common. But even in the most homogeneous society, differences are ever present and naturally result in different outcomes. The only sense in which we can fairly approach equality is in being equally protected by the law. If you insist on siphoning off the financial resources of those who provide valuable services to benefit others merely for existing, everyone is going to be worse off. Many books have been written to prove that this is the case. Helping people who have experienced some kind of unforeseen setback is fine, up to a point, but I think that ought to be voluntary too.



The FBI infiltrates everything, apparently. That doesn't mean that these lunatics are all fake or insincere.


Is your username a reference to the Ego Anarchists?


It’s a reference to Max Stirner's magnum opus.


How is a multi-square-kilometer radiator not just an inevitable Kessler syndrome disaster?

Edit: Some back-of-the-envelope calculation suggests that the total cross-sectional area of all man-made orbiting satellites is around 55,000 m^2. Just one 4km x 4km = 16,000,000 m^2 starcloud would represent an increase by a factor of about 300. That's insane.
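In code, the same back-of-the-envelope check (these are the assumed figures from above, not measured values):

  # Rough area comparison using the assumptions above.
  existing_cross_section_m2 = 55_000           # assumed total cross-section of all current satellites
  starcloud_area_m2 = 4_000 * 4_000            # one 4 km x 4 km array = 16,000,000 m^2
  print(starcloud_area_m2 / existing_cross_section_m2)  # ~290, i.e. roughly a 300x increase in area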


Sounds like a "slippery slope" fallacy without further explanation.


Not sure what the slippery slope is here. The linked page imagines a 4km x 4km radiator/solar array. The cross-sectional area of the array is going to be directly proportional to the probability of impacting high-velocity space debris. In such an event, the amount of debris generated could also scale with the area of the array. This seems bad.


> This seems bad

E.g., cyanide seems bad, but it won't kill you if the relative amounts are small.

tl;dr: You haven't characterized the denominator.


See my edit. Just one starcloud would represent an increase in a risk factor of over 300 compared to the status quo. Then multiply that by the number of starclouds you think would be deployed.


You still keep playing with the numerator.

> increase in a risk factor of over 300

Even with a numerator-only view, I suspect it's not fair to characterize the "risk factor" as going up 300x. There's a lot more nuance about orbits in space.


Tell me the nuance then. If people have concerns about Kessler syndrome at the Starlink scale, then why wouldn't something literally 1000x bigger be even more concerning?


I already did. Your reply/edit merely repeated your prior observation.

Getting back to the point:

You literally claimed that one of these would "inevitabl[y]" trigger a Kessler effect with no proof.

> something literally 1000x bigger be even more concerning?

Again, this isn't convincing if you don't have the denominator/context. Think about it: you still can't answer how many of these are needed to trigger the Kessler effect.

BTW, "increase by a FACTOR of about 300" != "increase in a RISK FACTOR of over 300"


I know this in the same way that, even though I don't know the exact credence to assign to the probability of particular bad effects from global warming, I can confidently say that a 1000x increase in CO2 emissions would be a bad thing. This is not because I have done a simulation, but because my beliefs rest on the assumption that while concerned experts might be wrong in the details, they are probably not wrong by three orders of magnitude.


I simply do not care if advertisers form an accurate view of my desires and beliefs.


Sorry, I'm going to be critical:

"We follow a strict 5-phase discipline" - So we're doing waterfall again? Does this seem appealing to anyone? The problem is you always get the requirements and spec wrong, and then AI slavishly delivers something that meets spec but doesn't meet the need.

What happens when you get to the end of your process and you are unhappy with the result? Do you throw it out and rewrite the requirements and start from scratch? Do you try to edit the requirements spec and implementation in a coordinated way? Do you throw out the spec and just vibe code? Do you just accept the bad output and try to build a new fix with a new set of requirements on top of it?

(Also, the LLM-authored readme is hard for me to read. Everything is a bullet point or emoji, and it is not structured in a way that makes it clear what it is. I didn't even know what a PRD was until halfway through.)


> So we're doing waterfall again?

I think the big difference between this and waterfall is that waterfall put the entire execution phase before the testing phase, and we have moved past defining the entire system as a completed project before breaking ground. Nothing about defining a feature in documentation up front stops continuous learning and adaptation.

However, LLM-assisted coding breaks the "Working software over comprehensive documentation" component of agile. It breaks because documentation now matters in a way it didn't when working with small teams.

It also breaks because writing comprehensive documentation is now far cheaper in time than it was three years ago. The big problem now is maintaining that documentation. Nobody is doing a good job of that yet - at least not that I've seen.

(Note: I think I have an idea here if there are others interested in tackling this problem.)


> So we're doing waterfall again?

The waterfall we know was always a mistake. The downhill-only flow we know and (don't) love came from someone at the DOD who only glanced at the second diagram (Figure 2) in the original 1970 Royce paper and said "This makes sense, we'll do it!" and... we're doing waterfall.

So, go to the paper that started it all, but was arguing against it:

- https://www.praxisframework.org/files/royce1970.pdf

I encourage you to look at the final diagram in the paper and see some still controversial yet familiar good ideas:

  - prototype first
  - coding informs design
  - design informs requirements
  - iterate based on tests -> design -> requirements (~TDD)
Crucially, these arrows go backwards.

See also the "Spiral Model" that attempts to illustrate this a different way: https://en.wikipedia.org/wiki/Spiral_model#/media/File:Spira...

Amazing that waterfall arguably spread from this paper, where it's actually an example of "what not to do."

Here's what Royce actually says about the waterfall diagram:

> The implementation described above is risky and invites failure. … The testing phase which occurs at the end of the development cycle is the first event for which timing, storage, input/output transfers, etc., are experienced as distinguished from analyzed. These phenomena are not precisely analyzable. … Yet if these phenomena fail to satisfy the various external constraints, then invariably a major redesign is required. … The required design changes are likely to be so disruptive that the software requirements upon which the design is based and which provides the rationale for everything are violated. … One can expect up to a 100-percent overrun in schedule and/or costs.

This is 55 years ago.


That "Spiral Model" sure looks like an OODA loop.


Waterfall is what works for most consulting businesses. Clients like the buzz of agile but they won't budge on scope, budget or timeframe. You end up being forced to do waterfall.


Yep. And you often end up doing waterfall with a veneer of agile that ends up being worse than either one.


This has been my experience too. It's horrible because everyone does all the agile meetings and "planning", but it's just used as progress reporting to the product managers... If that's all 'agile' is being used for, just do daily reporting and be done with it.


Waterfall might be what you need when dealing with external human clients, but why would you voluntarily impose it on yourself in miniature?


Because agile is a project management process, not an engineering practice. The value of sprints is in delivering product at the end of every sprint. If that's not happening - because the client isn't interested, so you're not getting product feedback from your customer, who is the only person whose feedback actually matters, and not using that feedback to determine the tasks that go into the next sprint (including potentially cancelling tasks for work the customer is no longer interested in) - then you're actually slowing the project down by forcing people to work on fit and finish every sprint before they need to (i.e. at project completion).

That's not to say that you shouldn't anyway have good engineering practice, like short-lived branches and continuous integration. But you should be merging in branches on a schedule that is independent of sprints (and hopefully faster than the sprint length).


OP here. I wouldn't necessarily call it waterfall, but it's definitely systematized. The main idea was to remove the vibe from vibe coding and use the AI as a tool rather than as the developer itself. By starting off knowing exactly what we want to develop at a high(ish) level (= the PRD), we can then create an implementation plan (epic) and break it down into action items (tasks/issues).

One of the benefits of using AI is that these processes, which I personally never followed in the pre-AI era, are now easy and frictionless to implement.


I think for me personally, such a linear breakdown of the design process doesn't work. I might write down "I want to do X, which I think can be accomplished with design Y, which can be broken down into tasks A, B, and C" but after implementing A I realize I actually want X' or need to evolve the design to Y' or that a better next task is actually D which I didn't think of before.


The way it totally disregards the many explicit instructions given in the "four panel" comic strip.


Right? Came to the comments specifically for this, but am confused by people's responses. With prompt adherence this bad, is it worth the 2 cents you spent on it? I don't see how it's even useful for deciding if you want to use the Ultra version, or for anything else really... Maybe if you want to redo it in Photoshop? But at that point, breaking out the old Wacom tablet and making a composite image would probably be just as time-intensive, but with much higher image quality (and none of the telltale signs of AI gen).


Even if you only earn $12/hour, 2 cents is worth it to save just 6 seconds.

An image has to be much worse than that to fail to save you 6 seconds.

That said, this is their own chosen example of what it can do, so I'd have to assume it is much worse than that on average.
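(For anyone checking the threshold arithmetic, a quick sketch:)

  # How many seconds of a $12/hour worker's time is 2 cents worth?
  wage_per_second = 12 / 3600          # dollars per second
  print(0.02 / wage_per_second)        # 6.0 seconds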


Will this save me 6 seconds? It'll take me longer than that to come up with a prompt, type it, enter it into the service, wait for it to generate, download it...

And again, if I can't use it because it's totally wrong, then... what are we even doing here?


> Will this save me 6 seconds? It'll take me longer than that to come up with a prompt, type it, enter it into the service, wait for it to generate, download it...

It will probably save a lot more, but the point is 6 seconds is the threshold at which 2 cents is "worth it".

Good art takes a long time to create.

If this image were representative, errors and all, it would be about where you could expect a professional to get after an hour or so, give or take. I've seen professionals working on an icon set for multiple days, and most webcomic artists I see, even when it's their full-time job and they've got a good system going to make their output easy for themselves, don't tend to produce output like this should have been more than once per day.

> And again, if I can't use it because it's totally wrong, then... what are we even doing here?

On this, I tend to agree. If you have a specific output in mind, quite often they're just wildly wrong. Repeated generations are just plain bad, and the system just can't seem to get what's being asked for.


> Imagen 4 Ultra: When your creative vision demands the highest level of detail and strict adherence to your prompts, Imagen 4 Ultra delivers highly-aligned results.

It seems that you may need the "Ultra" version if you want strict prompt adherence.

It's an interesting strategy. Personally, I notice that most of the time I actually don't need strict prompt adherence for image generation. If it looks nice, I'll accept it. If it doesn't, I'll click generate again. For creative tasks, following the prompt too strictly might not be the outcome users want.


I've found this is an interesting balance with Copilot specifically. Like, on the one hand I'm glad it aims for the bare minimum and doesn't try to refactor my whole codebase on every shot... at the same time, there are certain obvious things where I wish it were able to think a bit bigger-picture, or even engage me interactively, like "hey, I can do a self-contained implementation here, but it's a bit gross; it looks like adding dependency X to the project keeps this a one-liner - which way should it go?"


Give me a 'precision' slider then. On one end it should do precisely what you asked, to a T, even if what you asked for is dumb, and on the other end it should try to capture the spirit of what you wanted plus any obvious oversights.


I’ve had good experience with iterative prompting when generating images with Gemini (idk which model — it’s whatever we get with our enterprise subscription at work, presumably the latest.) It’s noticeably better than ChatGPT at incorporating its previous image attempt into my instructions to generate the next iteration.


Though that was only Imagen 4 Fast, not Imagen 4 or Imagen 4 Ultra.


Same for the poster. It asks for the ship to be going towards the right, and it's clearly doing the opposite.


As seen from the AI's perspective.


To the left of the "detailed spaceship" I think I see a distortion pattern reminiscent of a cloaked Klingon bird of prey moving to the right. Or I'm just hallucinating patterns in nebular noise.


The ship is reminiscent of Galactica's old-school Vipers. Different, but very similar overall structure.


Hopefully it's better than midjourney at least. Ignoring key parts of the prompt seems to be a feature.


Midjourney scores the absolute lowest in terms of prompt adherence against any of the other SOTA models (Kontext, Imagen, gpt-image-1, etc). At this point, its biggest feature is probably as an "exploratory tool" for visualizations by cranking up the chaos and weirdness parameters.


In the little experimentation I did with AI image generation, it seems to be more a game of trying multiple times until you get something that actually looks right, so I wonder how many attempts they made.


Fundamentally we still have the flat namespace of top-level Python imports, which is the same as the package name for ~95% of projects, so I'm not sure how they could really change that.


Package names and module names are not coupled to each other. You could have a package named "company-foo" and import it as "foo" or "bar" or anything else.

But you can, if you want, have a non-flat namespace for imports using PEP 420 – Implicit Namespace Packages, so all your different packages "company-foo", "company-bar", etc. can be installed into the "company" namespace and all just work.
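For illustration, a minimal sketch of that kind of layout (the "company-foo"/"company-bar" names are just the hypothetical examples above):

  # Two separately distributed packages sharing the "company" namespace
  # via PEP 420 (note: no company/__init__.py in either distribution):
  #
  #   company-foo/
  #     pyproject.toml            # project name = "company-foo"
  #     company/foo/__init__.py
  #   company-bar/
  #     pyproject.toml            # project name = "company-bar"
  #     company/bar/__init__.py
  #
  # After installing both, imports resolve under a single prefix:
  from company import foo
  from company import bar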

Nothing stops an index from validating that wheels use the same name or namespace as their package names. Sdists with arbitrary backends would not be possible, but you could enforce what backends were allowed for certain users.


Kind of have one with the missing image benchmark: https://openai.com/index/introducing-gpt-5/#more-honest-resp...


As opposed to a hypothetical scenario where it is legal to participate in illegal transactions?


I don't think a Chipotle burrito is actually 1600 calories unless you do something non-standard. Probably 800-1100.


Fair enough, but people are eating those daily plus 2 more meals. They don't need any additional food, even if they were running a marathon every day.


You're mistaking active calories for the only calories you burn in a day. There's a reason the caloric intake recommendation is ~2,000 kcal/day: your RMR (resting metabolic rate) is roughly ~2,000 kcal/day, meaning if you were to literally sit in bed all day, you would still "burn" about that much. On top of that you have your incidental daily activity - walking to the fridge, taking a shit, general locomotion, etc. You then have active calories burned on top of that (all of these together make up your TDEE, total daily energy expenditure). If you were to run a marathon, that means you would need to eat ~5,000 kcal in that day to break even.


Yes I’m advocating eating less. It sounds like we agree with each other. What did I miss?


If someone would have eaten 3 meals totaling 2000 calories, with one being a 500-calorie meal, and they decide to swap that out for a 1600-calorie Chipotle burrito, then the excess calories they need to burn are only 1100, which a half marathon would more or less cover.

The situation you described only makes sense if they're eating 2000 calories worth of food during the day and then adding an extra Chipotle burrito.

Re: your 2nd marathon comment, if someone is eating two 800-calorie meals a day plus an extra 1600-calorie burrito, that comes out to 3200 calories. Minus the 2k resting expenditure and minus the 2k marathon expenditure, they're 800 calories short, so they do need to consume more.

That is, I believe, the argument GP is making.

I don't disagree with you; for the amount of exercise most people are capable of doing on a daily basis, the extra calorie needs are insignificant. But I think your examples overstate the point.
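For what it's worth, the arithmetic in one place, using the same round numbers as above (rough assumptions, not precise figures):

  # Rough daily energy balance with the round numbers from this thread.
  resting_kcal = 2000                       # assumed baseline (RMR) expenditure
  marathon_kcal = 2000                      # the marathon figure used above
  intake_kcal = 2 * 800 + 1600              # two 800-kcal meals plus a 1600-kcal burrito
  print(intake_kcal - (resting_kcal + marathon_kcal))   # -800, i.e. an 800 kcal deficit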


we agree then that eating isn't necessary.


plus shrinkflation

