Before, lines of code (LoC) were (mis)used to try to measure individual developer productivity. And there was the collective realization that this fails, because good refactoring can reduce LoC, a better design may use fewer lines, etc.
But LoC never went away, for example, for estimating the overall level of complexity of a project. There's generally a valid distinction between an app that has 1K, 10K, 100K, or 1M lines of code.
Now, the author is describing LoC as a metric for determining the proportion of AI-generated code in a codebase. And just like estimating overall project complexity, there doesn't seem to be anything inherently problematic about this. It seems good to understand whether 5% or 50% of your code is written using AI, because that has gigantic implications for how the project is managed, particularly from a quality perspective.
Yes, as the author explains, if the AI code is more repetitive and needs refactoring, then the AI proportion will seem overly high in terms of how much functionality the AI proportion contributes. But at the same time, it's entirely accurate in terms of how this is possibly a larger surface for bugs, exploits, etc.
And when the author talks about big tech companies bragging about the high percentage of LoC being generated with AI... who cares? It's obviously just for press. I would assume (hope) that code review practices haven't changed inside of Microsoft or Google. The point is, I don't see these numbers as being "targets" in the way that LoC once was for individual developer productivity... they're more just a description of how useful these tools are becoming, and a vanity metric for companies signaling to investors that they're using new tools efficiently.
The overall level of complexity of a project is not an "up means good" kind of measure. If you can achieve the same amount of functionality, obtain the same user experience, and have the same reliability with less complexity, you should.
Accidental complexity, as defined by Brooks in No Silver Bullet, should be minimized.
> The overall level of complexity of a project is not an "up means good" kind of measure.
I never said it was. To the contrary, it's more of an indication of how much more complex large refactorings might be, how complex it might be to add a new feature that will wind up touching a lot of parts, or how long a security audit might take.
The point is, it's important to measure things. Not as a "target", but simply so you can make more informed decisions.
A bit of a nit, but accidental complexity is still complexity, so even if that 1M lines could be reduced, it's still way more complex to maintain and patch than a codebase that's properly minimized at, say, 10k lines. (Even though this sounds unreasonable, I don't doubt it happens.)
If tech companies want to show they have a high percentage of LoC being generated by AI, it's likely they are going to encourage developers to use AI to further increase these numbers, at which point it does become a measure of productivity.
> It seems good to understand whether 5% or 50% of your code is written using AI, because that has gigantic implications for how the project is managed, particularly from a quality perspective.
I'd say you're operating on a higher plane of thought than the majority in this industry right now. Because the majority view roughly appears to be "Need bigger number!", with very little thought, let alone deep thought, employed towards the whys or wherefores thereof.
A higher plane of thought would be "was AI able to remove 5% or 50% of the code while keeping or adding functionality and not diminishing clarity, consistency, and correctness"
I don't think the author is missing this distinction. It seems that you agree with him in his main point, which is that companies bragging about LoC generated by AI should be ignored by right-thinking people. It's just that you buried that substantive agreement at the end of your "rebuttal".
> would assume (hope) that code review practices haven't changed inside of Microsoft or Google.
Google engineer perspective:
I'm actually thinking code reviews are one of the lowest-hanging fruits for AI here. We have AI reviewers now in addition to the required human reviews, and they can do anything from being overly defensive at times, to finding out variables are inconsistently named (helpful), to sometimes catching a pretty big footgun that might have otherwise been missed.
Even if it's not better than a human reviewer, the faster turnaround time for some small % of potential bugs is a big productivity boost.
> under the guise of 'improving the user experience' or perhaps minimalism
I think we can be more charitable. Don't you see, even here on HN, people constantly asking for software that is less bloated, that does fewer things but does them better? Arguing that code is a cost, and every piece of complexity is something that needs to be maintained?
As features keep getting added, it is necessary to revisit where the UX is "too much" and so things need to be hidden, e.g. menu commands need to be grouped in a submenu, what was toolbar functionality now belongs in a dialog, reporting needs to be limited to a verbose mode, etc.
Obviously product teams get it wrong sometimes, users complain, and if enough users complain, then it's brought back, or a toggle to enable it.
There's nothing to be cynical about, and it's not something we "should be over by now." It's just humans doing their best to strike the balance between a UX that provides enough information to be useful without so much information that it overwhelms and distracts. Obviously any single instance isn't usually enough to overwhelm and distract, but in aggregate they do, so PMs and designers try to be vigilant and simplify wherever possible. But they're only human, sometimes they'll get it wrong (like maybe here), and then they fix it.
>As features keep getting added ... so things need to be hidden
Here lies the problem
Your first users were here without all the added features. It's very likely they didn't need those features to use the software. Then you add new features and clutter it... then remove features from the UX that they were using.
> then remove features from the UX that they were using
Yeah but like I said, people make mistakes. The thing about text output is that it's impossible to track if people are using it at first. You can measure button clicks and key presses. You can't measure eye gaze (at least not usually!).
The good news is, if you remove it and get complaints, you can measure complaints. If you put in a toggle to re-enable it, you can measure how many people activate the toggle. Then you actually have the data, so you can decide whether to just bring it back entirely, or keep it as a toggle, or what.
PMs and designers aren't omniscient. If a feature is view-only, you literally can't tell how much it's used, and it might be minor enough that you never ask about it in user interviews.
This is a common complaint about all large companies that use metrics but don't actually talk to people. It's not a new complaint at all. And at the end of the day, it's one only solved by users throwing an apocalyptic fucking fit at the group producing the software.
>If a feature is view-only, you literally can't tell how much it's used
Large companies do talk to people. User interviews are extremely common -- it's standard practice that you need to use metrics and talk to users. But like I said, interviews aren't going to cover every tiny little detail.
PMs and designers usually do think twice. But they're human, they're not omniscient. So maybe show a little grace?
There is only so much time in a day. Often singing in choir conflicts with playing sports because you have concerts and games on the same nights, so you have to make a choice. There are also schedule pressures: if you are going to get into college you nearly have to take math, English, science, and foreign language classes beyond what your school demands, and that forces hard choices if there even is a class period free (don't forget you might be taking band to fill that space).
Finally, there are a lot of bad teachers. They are so interested in winning competitions and teaching perfection, but for most students music will never be anything other than a fun hobby, and so they are getting the wrong teaching, which turns many students off. I've seen a lot of award-winning school choirs, and the next town over with the same number of students has twice the students in choir despite not winning awards. Communities need to pick, and often don't realize this.
In my secular chorus, we may have two main performances per season, but we rehearsed together for 2 hours every single week for months. We purchased polo shirts, and there was a dress code. Our dues covered operating costs and sheet music. Being a civic group for casual singers, our costs were kept low, but many choirs travel, double down on the costumes, and many people find it requires a high level of dedication, free time, and independent wealth. It is no coincidence that many members are retirees!
In church, a lector could prepare for 30 minutes and have a 5 minute speaking part at the Mass. The ushers and EMHCs also have part-time gigs. While altar servers are on duty for the entire service, they do not need to rehearse every single week. A church choir may serve for one or more weekend services, plus the 90-120 minute weekly rehearsals, and that's not counting holy days, Easter, Christmas. If you take a role as cantor, director, or piano/organ, expect to become indispensable! Some families just found it easier to join en masse so they could stay together.
Some chorus members are also secretly voice coaches, so if you protest "but I can't sing tenor" they may lovingly tackle you and sell you a package of private lessons.
I found it difficult to serve in any other ministry alongside choir, and you may find it difficult to hold down a job and/or family alongside a secular chorus role. As I said, high demand/great rewards.
In my part of the world, it's definitely declining church attendance. If you don't have a huge population of young boys and girls trained in those choirs, and instead the population is self-selecting into voice training, I would definitely expect a serious sex imbalance.
I've found some excellent vocal groups from parts of Asia not known for being especially Christian, but it seems that choral/a cappella music is very connected to Christianity there too.
I believe church attendance is getting more gendered too, with a shortage of men.
N=1 and not singing in a choir, I could probably sing tenor parts if I trained, but to me bass feels more satisfying, doubly so going low with throat singing.
As a professional-level baritone who has sung tenor parts quite a lot, there is a shortage of every low voice type (directors are often conflicted when I make the offer to sing tenor). People who can produce a chorally-acceptable A or Bb are in the shortest supply, though. It's getting worse as the amateur singing circuit gets smaller and the gender ratio gets more skewed.
Amateur-level choirs tend to have a lot more basses than tenors because it is easier to sing bass without effort spent on vocal training.
I am an amateur baritone, in school I was used for tenor parts because of course there was a shortage and I had good enough technique that I could sing tenor parts, if not well.
Now, I sing second bass for a men's choir, because that was what they were missing. I think all not in-the-middle voices are scarce.
Tenor parts are more difficult, technically speaking, and voices capable of the tenor range are rarer. So any given man joining a choir can more likely manage the bass range, and if they can, they can almost certainly manage the bass parts.
FTA:
> When men do join singing groups, they often avoid the tenor section. The tenor voice is “a cultivated sound”, says John Potter, author of a book on the subject. A man with no vocal training is more likely to have the range of a baritone (a high bass). It does not help that the tenor voice is associated with operatic stars such as Luciano Pavarotti, who could powerfully sing high notes that no amateur can easily reach. And the tenor line in classical choral music can be difficult, with many unexpected notes and alarming leaps.
surely a real bass is rarer? I just assume, as someone completely musically inept, based on listening to the vocal groups on the radio, that a bass contributes less, and can be omitted more easily?
People at the extreme ends of the spectrum of range are rarer and people in the middle of the range are more common. As it stands, choral bass parts fit better into untrained voices than choral tenor parts. A typical baritone (middle range male voice) can sing choral bass parts well enough, but will find tenor parts relatively strenuous.
I know many women who admit they "fall in love" anytime they hear a low bass. They might marry a tenor and never cheat on him, but every time they hear a low bass their heart flutters. Men know/see this, and so tenors become less interested, since their higher voices don't get the women (though they have plenty of other ways).
Anecdotal I guess, but when I was in a high school choir, I loathed that my teacher assigned me to the tenor section. It did not fit with the image of myself that the high school version of me held in my head; "a man should be a baritone or bass after puberty!"
I liked choir and stayed in it for all four years, but I was never particularly good at it so what the hell did I know anyway.
"average bass steals all the love interests" factoid actually just statistical error. average bass steals 0 love interests per year. John Tomlinson, who steals 10,000 paramours per year, is an outlier and should not have been counted.
> and the best the algorithm could do will be limited by the uncertainty in estimating those values
That's relatively easy if you're assuming simple translation and rotation (simple camera movement), as opposed to a squiggle movement or something (e.g. from vibration or being knocked). Because you can simply detect how much sharper the image gets, and home in on the right values.
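To make the idea concrete, here's a minimal sketch (my own illustration, not any particular stabilizer's algorithm) of the no-reference sharpness score you'd search against: the variance of a discrete Laplacian, which drops as a simulated motion blur gets wider, so a parameter search can pick the candidate correction that maximizes it.

```python
import numpy as np

def sharpness(img: np.ndarray) -> float:
    """Variance of a discrete Laplacian: blurrier images score lower."""
    lap = (-4 * img
           + np.roll(img, 1, axis=0) + np.roll(img, -1, axis=0)
           + np.roll(img, 1, axis=1) + np.roll(img, -1, axis=1))
    return float(lap.var())

def motion_blur(img: np.ndarray, width: int) -> np.ndarray:
    """Crude horizontal motion blur: average over `width` shifted copies."""
    return sum(np.roll(img, k, axis=1) for k in range(width)) / width

rng = np.random.default_rng(0)
frame = rng.random((64, 64))
# Score a sweep of blur widths; a deblurrer runs this search in reverse,
# keeping whichever candidate correction yields the sharpest result.
scores = [sharpness(motion_blur(frame, w)) for w in (1, 3, 5, 9)]
```

Under this metric the sharpest candidate wins; a real pipeline would search over translation and rotation parameters the same way, with an actual deconvolution step in place of the synthetic blur.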
Oof, I hope not. I wonder if the architecture for GPU filters migrated, and this feature didn't get enough usage to warrant being rewritten from scratch?
This is actually kind of hilarious. That your ex-wife would write to the FBI to denounce your character a couple of months after the divorce.
I did really enjoy this detail:
> It was an extremely ugly, long (2 years!) divorce hearing: it made the newspapers because of Bell’s allegations of “extreme cruelty” by Feynman, including the notion that he spent all of his waking hours either doing calculus and playing the bongos.
Brilliant guy... but it is funny to think how nonstop bongos could definitely drive a spouse crazy.
> Don't use products from large US tech companies?
What does large have to do with it? Why do you think smaller companies are any more likely to resist? If anything, they have even less resources to go to court.
And why do you think other countries are any better? If you use a French provider, and they get a French judicial requisition or letters rogatory, then do you think the outcome is going to be any different?
I mean, sure, if you're avoiding ICE specifically, then using anything non-American is a start. But similarly, if you're in France and want to protect yourself, then using products from American companies without a presence in France is similarly a good strategy.
> Why do you think smaller companies are any more likely to resist? If anything, they have even less resources to go to court
Somehow smaller companies do resist much more. Examples: Lavabit refused to expose Snowden; Purism offers SIM cards protecting you from tracking ("AweSIM").
No, some smaller companies do. Plenty don't at all. Apple is a gigantic company and known for being super privacy focused, keeping your information encrypted to protect it from governments wherever they can.
So what makes you think size has any relevance here?
> Apple is a gigantic company and known for being super privacy focused, keeping your information encrypted to protect it from governments wherever they can.
I wouldn't trust this marketing. Companies are on the users' side when the competition is strong, and Apple is practically a monopoly. See my other comment with examples of how it doesn't care about users.
I literally can't tell what the author is arguing against or for.
All the example table images seem fine, and have no captions saying whether they're supposed to be examples of good usage or bad usage.
So either I have no idea what "bad" examples of icon usage are because the author doesn't show any, or the author thinks some or all of them are bad when, to me, the icon+text+color examples seem great (and one figure caption indicates icons+labels are best)?
Yet the author continues to argue against icons and to use text instead? But never says whether icons+labels are actually better than just text, so we should use them in combination?
I'm baffled. For an article arguing for greater clarity, the article itself couldn't be less clear.
In a data grid or table the relative cognitive load of the page is already very high. Adding iconography to the table body content is often unnecessary and increases visual noise, processing requirements, and generally reduces readability/scanability.
I've always felt that icons in this context are a risk or liability instead of a strength. I decided to info dump my findings to my team then published it as an article.
I probably could use a good editor to help me next time!
I think you need to show examples of this bad usage, then.
Your first image has zero icons in the rows. It has album covers but those aren't icons.
Your second and third images show usage that combines text, color, and relatively standard icons like checkmarks, X's, or in-progress indicators. These are good, and if you're trying to suggest these reduce readability or scannability, or add noise, then I'm frankly baffled.
Your third image also shows profile images, but again those are not icons.
So what are you arguing against? I can't ever remember coming across icons in a datagrid that "added to the relative cognitive load". And if you're arguing against checkmarks or X's, I don't think your arguments hold up.
But even with "real" icons -- like, I've seen icons to show if a software package is for Windows or Mac or Linux. If a row is a TV show or a movie. If it's one file or an archive. If there's a PDF file download attached. An alarm icon for something past due. But these all seem totally fine and helpful. They're generally linked to a major feature of the platform that everybody understands, and help scannability.
Without clear examples of what you're arguing against, I'm frankly completely lost.
I don't think you're understanding. The point is that 20 people in a row will take advantage of your buffer to slow you down again and again and again, which makes you get to your destination later... because they're being selfish to get somewhere faster, and you're not so you get to where you're going slower.
We're not talking about where they're changing lanes to take the next exit. We're talking about where your lane happens to be moving faster, so they merge in front of you in an unsafe way to take advantage of that and just stay there. Why should you be expected to give them space, as you suggest? How is that fair, that they should get to their destination faster instead of you? Do you not see how that's going to rightfully make someone angry? When they should be waiting for a safe space to open up, rather than forcing you to slow down to create one?
I understand perfectly; 20 years driving. I think people just don't like that the safe answer is to be slow. You will not fix others' behaviour, so your options are: be slow and generous, get out of the chaotic lanes (unless that's all of them), or join them and be aggressive, claim space, and be stressed and annoyed your whole trip.
There is no solution to traffic here sorry, this is more about managing your own frustration and expectations when faced with people at their worst, in the worst form of transport.
The total, confirmed, 100% effective solution is to never commute by highway during peak hours, but few get that option.
I object to the "late" argument made by etho's parent. The difference in time to destination will inevitably be dominated by lights in city travel, not by modest speed differences (say 45 vs 55) on a highway. Being safe and out of the way is the trick! It would be nice if we got rid of left and U-turns and built our roads for that!
The subsidies for cars is crazy when you look at it from that perspective. What you need to do is invest a lot of money in areas and systems that can make it better over time. In the end you are going to spend less.
Ehto is correct and this is the way. I'll go further and say that if someone is tailgating you and it's pissing you off, generously let them pass. Literally pull to the side of the road if you must.
The issue is that when you slow down, you’re (a) creating ‘turbulence’ in the traffic flow with increased speed differential between cars and increased lane changes, which increases accident risk for everybody, and (b) it’s not even solving the problem because you still perpetually have some impatient driver wedging themself in directly in front of you, deleting your buffer zone.
It’s safer to drive a little closer, keep up with other traffic and defend what gap you can in front of you.
Agree with your conclusion here, though. The best response is to simply not drive in this kind of traffic.
Hard disagree. It is not safer to ignore your safety buffer. It is certainly not safer to defend your buffer.
If traffic is very busy, the trick is to just accept that people will wedge in front of you, and keep slowing slightly each time to increase the buffer again. You might create 'turbulence', which might possibly decrease safety a bit for all the impatient drivers doing the wedging. But it increases your own safety, and therefore also that of your passengers and the people following you.
I'm also not convinced on the 'turbulence' part. Keeping a buffer smoothes out any sudden speed variations of the people in front of you, which makes the traffic behind you flow better.
And it might maybe feel a lot slower to let 100 cars go in front of you on your commute, but just driving 99 km/h when the person in front of you does 100 is enough to increase your gap, and it makes a whopping 1% difference.
The only thing is: sometimes a road is just too busy and the space for a buffer just isn't there to begin with. At that point speeds should go down to accommodate the smaller buffers, which is actually what happens here in the Netherlands, as long as there aren't too many people ignoring the speed advisory boards above the highway.
> The issue is that when you slow down, you’re (a) creating ‘turbulence’ in the traffic flow with increased speed differential between cars and increased lane changes, which increases accident risk for everybody, and (b) it’s not even solving the problem because you still perpetually have some impatient driver wedging themself in directly in front of you, deleting your buffer zone.
That's very obviously not true. Slowing down always reduces energy in the system and always reduces global turbulence. It's one of the reasons that countries that lower speed limits see journey times reduce.
Is there a statistics name for the last part? I'd like to compare different countries. It's definitely NOT true in Colombia at least, which makes me believe OP more.
We in Colombia had a public service announcement that showed someone driving really fast (while still respecting traffic lights), and another going with just enough speed. In the end, they both reach the last light almost at the same time and then part ways. Essentially it shows that driving crazy fast in the city doesn't necessarily get you to your destination faster.
Now that I'm an adult, I've tested it several times, and it matches 90% of my attempts, but that's in the city, with traffic lights. No way I'd think letting everybody steal everybody else's buffer would reduce journey time, even on highways. You're adding items to a queue; it'll take longer.
Now, it is probably safer, but we can only take so much even if we are not in a rush.
Slowing down on a busy highway does not reduce turbulence at all; it adds chaos and unpredictability to the system. One car suddenly slowing down to create a buffer zone causes the car behind to slow more and more, and can often lead to a stop further back. This has been proven time and again in closed-loop systems studying highway traffic flow. They are known as "phantom" traffic jams or shockwave traffic jams. Example: https://www.newscientist.com/article/dn13402-shockwave-traff...
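For intuition, here's a toy, deterministic variant of the Nagel-Schreckenberg ring-road model (my own sketch, not the simulation from the linked article): each car accelerates up to a speed limit but never drives further than the gap to the car ahead. One car braking for a single step makes the cars behind it stop in turn, so the jam travels backward through traffic that had been flowing uniformly.

```python
def step(pos, vel, road_len, vmax, frozen=None):
    """One update of a deterministic car-following model on a ring road.

    Each car speeds up by 1 (capped at vmax) but is limited to the gap
    to the car ahead; `frozen` forces one car to a full stop this step.
    """
    n = len(pos)
    gaps = [(pos[(i + 1) % n] - pos[i] - 1) % road_len for i in range(n)]
    vel = [0 if i == frozen else min(v + 1, vmax, g)
           for i, (v, g) in enumerate(zip(vel, gaps))]
    pos = [(p + v) % road_len for p, v in zip(pos, vel)]
    return pos, vel

ROAD, VMAX = 30, 3
pos = [3 * i for i in range(10)]   # 10 cars, evenly spaced (gap = 2)
vel = [2] * 10                     # steady uniform flow

pos, vel = step(pos, vel, ROAD, VMAX, frozen=0)  # car 0 brakes once
stopped = set()
for _ in range(4):                 # afterwards everyone drives normally
    pos, vel = step(pos, vel, ROAD, VMAX)
    stopped |= {i for i, v in enumerate(vel) if v == 0}
```

Cars 9, 8, ... behind the braking car come to a complete stop in later steps even though car 0 itself is already moving again: the shockwave propagates upstream, just as in the closed-loop experiments.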
Yes, and they are caused by sudden decelerations which are the result of many factors, including driving too fast for the conditions, roadway, and traffic, and tailgating.
> Slowing down on a busy highway does not reduce turbulence at all,
The only thing that reduces global turbulence reliably on any roadway is reducing speed. All the simulations and real-world implementations show this. It's unambiguous and uncontroversial, except that it requires drivers to slow down, which is politically untenable in many jurisdictions.
> Slowing down on a busy highway does not reduce turbulence at all; it adds chaos and unpredictability to the system. One car suddenly slowing down...
I agree that slowing down "suddenly" causes turbulence. However, slowing down *gradually* allows you to build up a safety buffer which in turn allows you to avoid slowing down suddenly.
It is more dangerous to be slow and have people constantly merging in front of you, rather than be slightly faster and not have all the merging. Accidents happen when vehicles are going different speeds, all things equal.
Obviously it is safer to have longer follow distances, all things equal. But you don't accomplish that if you leave a long follow distance that is cut off a few seconds later by another car trying to get ahead. You end up with a constant stream of cars cutting your follow distance to less than what it would have been if you had just stayed slightly closer to the car in front of you.
We don't live in an ideal world, and having a bunch of cars merging in front of you definitely makes you less safe than having a static situation. I try to make sure I can see through/around the car in front of me, so that I have advance notice of what's happening down the road.
American road laws are insane here. The law should be simple: you must be in the outside lane at all times unless you are overtaking, and once you're done overtaking, you should merge back into the outside lane.
As far as I know that’s the law in every state I’ve driven in, but enforcement is pretty much nonexistent. Some states like Texas or Louisiana might have signs reminding people to stay out of the inner lanes except for passing but I’ve never heard of anyone getting a ticket over it. What’s enforcement like in the UK?
That used to be the case in Ireland too, but confusion due to cultural contamination means pretty much everyone moved to numbering lanes (from the "outside"/"slow"/leftmost lane).
When I did my B license test probably about 30 years ago, the Rules of the Road all referenced inside/outside lanes. When I did my CE license last year, it had been updated to only use lanes 1, 2, 3 etc.
Obviously fast and slow are just colloquial terms.
Why? If everyone followed the rules the lanes would segment into slowest on the right, with gradually increasing speed to the left and people moving between the lanes as needed to overtake. It would be far far far better than the chaos of having to move across all the lanes of traffic all the time because there are random campers driving below the speed limit in every single lane.
First, everyone switches right as soon as there's a gap in a righter lane, so lots of unnecessary switching. Second, the right lane is always full making it hard to merge on or off the highway. Third, the leftmost lanes are underutilized when they could be filled with people who have a long way to go until their offramp.
My decades-long impeccable driving record tends to indicate otherwise. I just don't drive as if I lived in the fantasy land where leaving a long follow distance means I have a lot of room in front of me. It doesn't. It means I get cut off, and the follow distance ends up being shorter than it would have been had I just been following at the same distance as all the other cars on the road.
It is possible, of course, that the highways you drive are just too busy and the max speed is actually set too high for how busy the road is. That happens more than you'd want, because lowering the max speed of a highway is always an unpopular thing to do, even if it's needed.
Still, I tend to find that people underestimate the danger of short distances. Often it's just better to accept 100 cars going in front of you than to shrug off following someone at 1.5 seconds. It can go well for years because crashes are rare, but when you are in a crash you will be royally screwed if you don't have the reaction distance needed.
This assumes that you can actually maintain a 3 second follow distance. On some roads, you simply cannot, and an attempt to maintain such a distance leads to increased danger from all the cars that cut in.
Simply put: follow distance is not a unilateral decision.
If you actually want the safest option then you should merge all the way right and keep slowing down. No one is going to merge right if they are trying to go faster; they will only do it to get off at the offramp. Meaning the gap will reopen as people exit through the offramp or merge left into faster lanes.
If you choose to go in the fastlane in traffic you should understand that it will have people who do not care about the following distance as much and are just trying to go as fast as possible.
I have found that often times in heavy traffic the rightmost lane can be just as fast or actually faster than a middle or left lane.
> No one is going to merge right if they are trying to go faster
In my experience even cars that are not trying to go faster will happily merge in front of you unsafely all the time, just because they don't understand the concept of a safe distance.
> If you choose to go in the fastlane in traffic you should understand that it will have people who do not care about the following distance as much and are just trying to go as fast as possible.
It's not about choosing to go in the fast lane. It's about the fact that in heavy traffic, you have no idea which lane will be fastest, because they're all heavy and which one is fastest keeps switching.
> I have found that often times in heavy traffic the rightmost lane can be just as fast or actually faster than a middle or left lane.
That's exactly my point. Which is why you can be in the right lane, and tons of people from the slower lane will try to merge in front of you if you're keeping a safe distance from the car in front.
Your advice of staying in the right lane doesn't apply in these situations.
This is a long thread of people talking past each other. The bottom line is simply this: if you want to drive with a larger-than-average following distance (call it whatever you want, a safety buffer, a "proper" following distance; the point is that it is a distance greater than the average following distance of the other drivers on the road) then you have to accept that you will not be able to drive at the same speed as the other traffic on the road. It's physically impossible. It can be psychologically frustrating because you see all the cars around you moving at X mph but your self-imposed constraints mean you can only make way at (X minus Y) mph. But them's the breaks, no pun intended.
> It can be psychologically frustrating because you see all the cars around you moving at X mph but your self-imposed constraints mean you can only make way at (X minus Y) mph.
This is correct, but I get the sense that people overestimate Y.
Let's say you're driving 60 mph and following the "three second rule" which gives you a ~264 foot safety buffer. A driver then cuts into this safety buffer. Let's assume they like to go fast and enter closer to the front of the buffer so they reduce your safety buffer down to two seconds. In response, you gradually rebuild the safety buffer back to three seconds, costing you an extra second. Soon after you rebuild the safety buffer another car cuts in front of you. Let's say this process repeats every mile of your journey, costing you an extra second every time. This results in you traveling slightly over ~59 mph, making Y = ~1 mph.
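The arithmetic above can be sketched in a few lines. This is a back-of-the-envelope model using the same assumed figures as the comment (one cut-in per mile, a one-second rebuild cost each time), not a measured result:

```python
# Rough model of the speed cost of rebuilding a 3-second buffer.
# All numbers are the assumptions from the comment above.
cruise_mph = 60
buffer_ft = 3 * cruise_mph * 5280 / 3600   # 3-second rule at 60 mph -> 264 ft
seconds_per_mile = 3600 / cruise_mph        # 60 s to cover one mile at 60 mph
rebuild_cost_s = 1                          # one extra second per cut-in, one cut-in per mile
effective_mph = 3600 / (seconds_per_mile + rebuild_cost_s)

print(buffer_ft)       # 264.0
print(effective_mph)   # ~59.0, so Y is roughly 1 mph
```

So under these assumptions Y really is about 1 mph, which matches the claim.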
Compare that to the lifetime odds of dying in a car crash in the U.S. which is roughly 1 in 100. It's hard to eliminate that entirely, but I'm willing to spend an extra ~1s per car that cuts in front of me to reduce it for myself and my passengers.
Not so. Keeping a constant distance from the car ahead means both cars are moving at the same speed. When a jerk cuts in, after a moment all 3 cars will be moving at the same speed.
We are saying the same thing. When a jerk cuts in, drivers readjust their speed to maintain desired following distance. Net effect, slower speed for all but the lead car
If you personally start with that slower speed to begin with (AKA much longer following distance), you don't have to worry about adjusting down
The fast lane is just another name for the leftmost lane; I am not talking about the one moving fastest.
Again, we are not talking about the fastest lane here; we are talking about the safest, as the OP was concerned about following distance.
> That's exactly my point. Which is why you can be in the right lane, and tons of people from the slower lane will try to merge in front of you
If everyone merged right it would no longer be faster, but people do not do this. In the right lane you can slow down as much as you want and never cause an issue, so you can always make a gap. In any other lane, if you slow down more than traffic you cause issues, because people will then try to pass you on the right, which is dangerous.
You are placing the burden of your forward following gap on the cars around you, but that is a terrible way to drive. You need to be in control of yourself when driving: do not trust that someone is going to follow traffic laws, do not trust that they will go whichever way their turn signal says, do not trust that they will look over their shoulder before merging.
If YOU want a following gap then the only possible safe way to do this is to merge all the way right and slow down whenever someone merges in front of you. There is no other way to do it in heavy traffic. And YES you will have to live with the fact that you will be driving slower than the traffic around you. That's the trade you make if you choose to have a large following gap.
You’re the problem because of the way you are thinking. You don’t own the asphalt in front of you. You’re angry because in your mind you do, and you feel righteous about it. That’s why you are casting a moral judgement about them.
The most efficient throughput of the road system is not for people to “politely” queue up for 5 miles. People should be utilizing the road and merging in an orderly manner. By adopting some arbitrary self-imposed practice that leads to 20 drivers cutting in front of you, you are the one creating an unsafe situation.
Correct, but you _need_ the asphalt in front of you for both safe driving and also to avoid cascading hard-braking events. I also don't own the asphalt under me or behind me either, so it's kind of a silly statement tbh.
You do, but if you leave so much that 20 cars are pulling in front of you, you're either driving too slow or misjudged and left unnecessary space. At 15-20 mph traffic speeds, you need 8-20 feet. Ideally, cars in a multiple-lane-to-one-lane exit scenario should be zipper merging when congestion reduces speeds. Engineers model this behavior and try to design roads to encourage it.
If you do that and get angry when people change lanes in front of you, you have consciously or subconsciously decided you own that 20 foot gap. That reaction impairs judgement and causes accidents.
To be completely clear, the conversation is entirely not about zipper merging, but about people who are using safety gaps as opportunities to weave through traffic, attaining a faster-than-average travel speed at the cost of everyone else's average travel speed and safety.
Nobody is getting screwed. I've been the person making the gap many many times. You just ignore them, it isn't hard - they are way up there and I'm way back here, plenty of space. I just keep on with my business of safely driving. Sure I often wish I could go the speed limit - but in reality I'm going almost as fast as they are so it isn't like a few feet lost costs me anything. Odds are I'll be stopped at a red light and lose a lot more time once I get off the highway.
Besides, there are only a few people who ever merge in front of me (and then those who don't merge block their lane so nobody else can get in).
In high traffic you’re definitely being screwed - both by the continuing lack of a proper safety gap, and by not being able to go a normal speed. Which does add up in many of these situations.
But I guess we should just all self gaslight to feel better about it?
In Bay Area traffic I’d literally not be moving at all for most of the time if I followed your advice, in heavy traffic. That’s the exact situation I’m talking about.
If 20 people take advantage of your buffer, then you are delayed by a distance of 20 vehicle-lengths + 20 follow-distances. This is about 1000 meters, a distance which you can travel in about 45 seconds. So the net effect of all 20 people merging in front of you is less than a minute's delay on your trip. Unless there is an almost constant stream of people merging in front of you, this isn't adding up to more than a percentage point or two of your whole trip.
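The 1000-meter / 45-second claim checks out under plausible assumed numbers (roughly 5 m per vehicle, ~45 m following gaps, traffic moving about 22 m/s, i.e. ~50 mph; the comment doesn't state its exact inputs, so these are guesses that reproduce its totals):

```python
# Back-of-the-envelope check of the 20-car merge delay.
cars = 20
car_len_m = 5.0        # assumed vehicle length
follow_dist_m = 45.0   # assumed following distance at highway speed
speed_mps = 22.0       # ~50 mph traffic speed (assumed)

lost_distance_m = cars * (car_len_m + follow_dist_m)  # 1000 m
delay_s = lost_distance_m / speed_mps                 # ~45 s

print(lost_distance_m)  # 1000.0
print(delay_s)          # ~45.5
```

On a 30-minute trip, ~45 seconds is indeed on the order of a couple of percent.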
I wonder what this would do to battery life -- continuously-on IR cameras are going to be a significant power draw. And then there's the question of whether the video processing is done on the earbuds, or how much Bluetooth bandwidth is used sending the video stream to your phone for processing.
Using this to detect gestures does seem very cool, however. Seems like a fascinating engineering challenge.
It would be incredibly useful for scrubbing through commercials during podcasts, if you could just pinch your fingers and drag through the air. Infinitely better than double-pinching the stem 15 times in a row.
Finer volume control. Different gestures for fast forward vs next track. Additional gestures e.g. for liking the current track.