Apparently with the Ryzen processor, the slots are limited: "The Framework Laptop 13 with Ryzen™ 7040 Series processors has two fully capable USB4 ports, with the back left and back right Expansion Cards slots. The front left Expansion Card slot can handle both USB 3.2 and DisplayPort Alt Mode, while the front right Expansion Card can use USB 3.2. This does mean there is one Expansion Card slot that can't support the HDMI or DisplayPort Expansion Cards, and most OS's will provide a warning if you forget. You can charge your Framework Laptop through any of the four Expansion Cards as well.".
The whole point of the slots is to be able to rearrange them. If such options are limited, they may as well be standard ports, which would use less space.
It basically negates a significant portion of the utility of the device.
Expect the unexpected. It doesn't have to be a rational known ask. We are Hackers. We want to be ready for whatever. That day when there's a 4 screen display wall? Ready. Connecting to a thunderbolt NAS, a display, a desktop, and our friend's laptop? Ready.
"What is the use case" is the deadening soul sucking most un Hackerly thing to ask. Computers are amazing general purpose machines & we celebrate their flexible utility. We don't know where flexibility always leads but we value it. We value open systems, open possibilities, open frontiers, not being bounded.
Edit: based on the voting, I guess only some of us on HN are Hackers! I also have example use cases too, to be constructive, but guess that's not enough for the cynics.
A "hacker" is someone who makes something do what it wasn't intended to do. Plugging a high-end laptop into the correct number of monitors is not hacking.
Maybe stick to Intel chipsets if you must have 4 screens. Or maybe cover over the 4th port, and pretend you have 3 fully-functional ports?
That's a pretty good definition. I think there's also an element of being able to pull off the unforeseen. A good code hacker isn't exactly defying computer architecture to extract their wins; they're finding brilliant solutions & cobbling together interesting systems that happen to meet the need.
I think AMD needs the pressure. 4x Thunderbolt 4 is amazing. AMD offering 2x USB4 is such a massive downgrade. Just a shake of the head & saying "go buy Intel" as if this were some natural, expected, unremarkable situation doesn't cut it for me.
IIRC, full USB4 is pretty much TB3 plus some extensions... so it probably shouldn't be a deal breaker. You can still use a TB dock and an eGPU. USB4 can run at 20 or 40 Gbps; TB3 and TB4 are both 40 Gbps.
Not quite. USB4 is based on TB3, but interoperability with Thunderbolt 3 is optional for hosts and devices. You could use any TB4 or USB4 dock that has a USB4 upstream port.
40 Gbps operation, 20 Gbps (in USB 3.2 mode), and PCIe tunneling are all optional for the host.
It's a mess, but I assume "fully capable USB4 ports" means they are indeed fully capable. Two "full" USB4 ports are imo enough. I'm not plugging 4 monitors into my laptop directly; that's what the dock is for.
That's what I meant by "full" USB4: you get TB compatibility and the higher speed. A third port on one side also has DisplayPort Alt Mode over USB, or can drive an HDMI adapter. Only one port isn't capable of display out, and two are capable of high-speed connections (external GPU or dock).
In the end, I do think the connectivity, while less than the Intel option, is probably enough for most people for most uses. I honestly don't use my personal laptop much and my work laptop is pretty much always docked to my TB3 dock at my work desk.
Any reason why you can't hook up that 4 screen display wall and that NAS at the same time by chaining DisplayPort and/or using a USB4 hub?
To me, being hackers means finding ways of circumventing limits, rather than expecting all kinds of connectivity to just be handed to you the easiest possible way.
> Edit: based on the voting, I guess only some of us on HN are Hackers! I also have example use cases too, to be constructive, but guess that's not enough for the cynics.
What's attracting downvotes is more likely to be gatekeeping language like this.
I wasn't being super fair or nice, but I very much felt like I was trying to combat gatekeeping. I was advocating for the open & possible, against someone who was trying to shut people down.
Intel has 160 Gbps of throughput available for connectivity. You might be able to physically get data to all the same devices with AMD, but at much reduced bandwidth, and the job won't be as easy. And you won't be able to use Thunderbolt peripherals like external GPUs, if those are available.
USB4 definitely does not require PCIe transport (it does, however, require DisplayPort). Once you have USB4, though, adding PCIe shouldn't be that much more work; you've already done the hard stuff. Which is part of why I've felt so cheated, thinking AMD hasn't done PCIe in their USB4.
Agreed. Generally best used against systems & plans. Wanting to know how something ticks.
Using it as a cudgel against open frontiers happens too. I guess it's more a disposition of the person here. Personally I find that stark conservatism to be heinously ugly.
"What is the use case" is sometimes a very fine question, especially if someone is struggling to get to success. But recognize the other cases: where there are unknowns, where we want Postelian possibilities, where we want many small pieces we can loosely couple, where we aren't just trying to make it through right now but are working toward longer answers. There, "What is the use case" is often a complete rejection of open thinking; it is the mind closing.
It's such a pity AMD revolutionized bandwidth & interconnectivity on Epyc (128 PCIe lanes), but has left consumers hanging. Threadripper HEDT was 64 lanes (the Pro really was a full Epyc: 128 lanes, 8-channel memory), but now Threadripper is gone.
Meanwhile desktop & mobile have been so capped, at 20-24 lanes, right where we were.
The Thunderbolt 4 abilities of Intel chips are just stunning: 160 Gbps of connectivity, usable for host-to-host connectivity... I wish AMD were trying to catch up here, and wouldn't compromise their lead by being so far behind on this.
But it's pretty common in any laptop with multiple USB-C ports,
and most of the time the intuitive port usage will be the right one anyway.
Most external devices don't benefit to a noticeable degree from USB4 over USB 3.2. The most common exceptions are expanders, docks, monitors with integrated docks, eGPUs and similar.
But most of those you would normally prefer on the back ports anyway, to get the cables out of the way.
The front ports are more often used for input devices, USB disks and similar. And those very often yield the same practical performance on USB 3.2 as on USB4 (not always, but _really_ often).
So yes, this matters for some people who want 4 USB4 ports, or 2 USB4 ports plus an HDMI port and a charging port, and who aren't okay with the charging cable being on the front right port.
But the huge majority of people will not care.
And there are limitations on what the different laptop CPUs can support, so it's not like they could have just magically added additional USB4 ports or given DP support to all ports.
One of the best ways I've managed latency with MySQL is basically this:
1) Use persistent connections, let the OS handle them, and tune the settings to allow this (on both the connecting server and the MySQL server). And never close the connection on the application side. (This can lead to potential deadlocks, but there are ways around it, like closing bad connections to clear their thread info on MySQL.)
2) Run the whole thing in a transaction: simply BEGIN TRANSACTION, or autocommit if allowed (same thing).
Doing so, when you are done rendering the content, you can flush it and send the signal telling nginx or Apache that the response is done (like PHP's fastcgi_finish_request when working with FPM), and then run your commit. Obviously only use this when you can safely disregard failed inserts.
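To make (1) concrete, here's a minimal stdlib-only sketch of a persistent pool that never closes healthy connections and only replaces bad ones. `connect_fn` and `is_healthy_fn` are hypothetical hooks of mine, not the author's code; in practice you'd back them with your MySQL driver's connect and ping calls.

```python
import queue


class PersistentPool:
    """Minimal persistent-connection pool sketch.

    connect_fn and is_healthy_fn are placeholder hooks: supply your
    own, e.g. a MySQL driver's connect() and a ping-based check.
    """

    def __init__(self, connect_fn, is_healthy_fn, size=4):
        self._connect = connect_fn
        self._healthy = is_healthy_fn
        self._idle = queue.Queue()
        for _ in range(size):
            self._idle.put(connect_fn())

    def acquire(self):
        conn = self._idle.get()
        # Never close healthy connections; only replace bad ones,
        # which also clears their thread info on the MySQL side.
        # (A real pool would also close the dead handle here.)
        if not self._healthy(conn):
            conn = self._connect()
        return conn

    def release(self, conn):
        # Return to the pool instead of closing.
        self._idle.put(conn)
```

The point of the sketch is the acquire path: a dead connection is swapped out transparently, so the application never has to close connections itself.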
> 1) Use persistent connections, let the OS handle them, and tune the settings to allow this (on both the connecting server and the MySQL server). And never close the connection on the application side. (This can lead to potential deadlocks, but there are ways around it, like closing bad connections to clear their thread info on MySQL.)
This is definitely ideal, but one thing you can't entirely control is the server side or what's in between. Sometimes your connections get interrupted, and it's not possible to maintain a connection forever. Yes though, this is the ideal thing to do, with a connection pooler.
> 2) Run the whole thing in a transaction: simply BEGIN TRANSACTION, or autocommit if allowed (same thing).
This shouldn't really help with latency. Being in a transaction doesn't reduce latency; if we're being pedantic, it would likely increase it, since executing a BEGIN and a COMMIT typically adds two more round trips, one per query.
I think what you're getting at is something like pipelining, where you can send multiple queries in one request, and get multiple results back. This is technically supported over the mysql protocol, but isn't very widely adopted in practice.
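The extra-round-trip cost is easy to put numbers on. A toy model (the 5 ms RTT is purely illustrative):

```python
def request_latency(round_trips, rtt_ms, server_ms=0.0):
    """Wall-clock time for `round_trips` sequential network round
    trips of `rtt_ms` each, plus server-side processing time."""
    return round_trips * rtt_ms + server_ms


# A single INSERT with autocommit: one round trip.
auto = request_latency(1, 5.0)       # 5 ms
# Explicit BEGIN + INSERT + COMMIT: three round trips,
# unless the driver can pipeline them into one request.
explicit = request_latency(3, 5.0)   # 15 ms
```

With pipelining, the three statements collapse back toward one round trip, which is exactly why it matters here.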
We're a service provider. As a client and customer, you connect to us as a third party service. You don't control our uptime or connectivity. Nor do you control whatever network hops may be between.
I currently live in Colombia and every once in a while, around 40-60% of packets drop. I noticed it was a peering connection in Miami, and almost all traffic went through there. Even if I went to an Argentinian IP, it would still go up north to Miami and then south (my guess is cheap residential peering connections). Anyway, I basically got a server from a provider that bypasses this peering connection and set up a VPN. When my internet starts misbehaving, I simply connect through that VPN and it fixes the packet loss. You adapt.
A bad SQL query wouldn't do that. Look at their site: from the US, it calls cloud functions for a COP-to-USD conversion in each place it renders a currency (200+ requests just going to their homepage). I think it was built poorly, but that's just my opinion.
I agree with you. There are so many ways to make this infinitely more efficient. For starters, why are they re-calculating and re-rendering the value every time they get a new donation? They could also store those values in colder, cached storage and only hit Firestore for the reads that update it.
Don't use a freaking database that charges you for every single read and write to handle mundane client-side renderings.
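For illustration, caching the conversion rate could be as simple as a TTL wrapper. This is a hedged stdlib sketch; `fetch_fn` (e.g. the cloud-function call for the COP-to-USD rate) and the 5-minute TTL are assumptions of mine, not taken from their code:

```python
import time


class TTLCache:
    """Serve a cached value, refreshing via fetch_fn at most once per TTL."""

    def __init__(self, fetch_fn, ttl_seconds=300.0, clock=time.monotonic):
        self._fetch = fetch_fn        # e.g. the paid rate lookup (assumed)
        self._ttl = ttl_seconds
        self._clock = clock           # injectable for testing
        self._value = None
        self._expires = 0.0           # forces a fetch on first access

    def get(self):
        now = self._clock()
        if now >= self._expires:
            self._value = self._fetch()
            self._expires = now + self._ttl
        return self._value
```

Every render then reads from memory; the billed read happens at most once per TTL window instead of once per page element.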
I have about 9 years of professional experience, working as a fulltime freelancer for the last 3. Most of my experience has been in Internet Marketing. My last client hired me to build a platform for purchasing traffic from multiple sources (Adwords, Taboola, Gemini) and handling massive amounts of data to help auto optimize media buying. I've also dealt with random problems, like having to write Lua inside Nginx in order to do real-time bot detection alongside ZeroMQ for offsite processing at massive scale. I love to solve problems.
I tried so many different tools, mostly trying to avoid paying for a service, but honestly, there's nothing as good and as simple as getharvest.com. Simple time tracking, simple invoice generation and my clients can pay via a credit card (Stripe integration) right away. It's worth it.
ZeroMQ is amazing. A few years ago I built a prototype project for a client that was basically fail2ban, but scalable. It monitored nginx logs and broadcast some information, and workers did the rest. Most of the data was kept in memory and passed around, with communication done via ZeroMQ. It was built this way so that we could split the heavy-load components off the server and into workers, and let the server simply do its job: tail nginx logs and act upon ban requests from workers. It was amazing; sadly, I never finished it and deployed it to a production environment, but in initial tests it outperformed fail2ban by a lot.
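Not the original code, but the wiring can be sketched in a few lines with pyzmq (assuming it's installed; the socket name and message shape are illustrative). PUSH/PULL gives lossless, load-balanced fan-out to workers, which fits the tail-and-delegate design:

```python
import zmq

ctx = zmq.Context.instance()

# Worker end: pulls log events, analyzes them, and (in the full
# design) would send ban requests back on a second socket.
pull = ctx.socket(zmq.PULL)
pull.bind("inproc://loglines")   # a real deployment would use tcp://

# Log-tailer end: pushes suspicious nginx log lines to workers.
push = ctx.socket(zmq.PUSH)
push.connect("inproc://loglines")

push.send_json({"ip": "203.0.113.7", "path": "/wp-login.php"})
event = pull.recv_json()         # a worker picks up the event
```

With tcp:// transports, adding capacity is just starting more PULL workers; ZeroMQ load-balances the pushed events across them.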
This is borderline a deal breaker for me.