I am not an IAM expert, but maybe the app should have an admin login that sets up an IAM user with full permissions on any S3 bucket(s) the app needs to work.
There should be instructions on how to set that IAM user up (don't make it the root account! It just needs full access to a single bucket, ideally).
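Roughly something like this, as a minimal sketch with boto3 (the bucket and user names are hypothetical placeholders, and it assumes the credentials running it have IAM admin rights):

    import json
    import boto3

    # Hypothetical names; substitute your own.
    BUCKET = "my-app-bucket"
    USER = "my-app-user"

    # Policy scoped to exactly one bucket: full S3 access to the
    # bucket itself and to the objects inside it, nothing else.
    policy_doc = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": [
                f"arn:aws:s3:::{BUCKET}",
                f"arn:aws:s3:::{BUCKET}/*",
            ],
        }],
    }

    iam = boto3.client("iam")
    iam.create_user(UserName=USER)
    policy = iam.create_policy(
        PolicyName=f"{USER}-s3-access",
        PolicyDocument=json.dumps(policy_doc),
    )
    iam.attach_user_policy(UserName=USER, PolicyArn=policy["Policy"]["Arn"])

Then you'd generate access keys for that user and hand only those to the app, so a leak exposes one bucket rather than the whole account.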
Basically, right now you find a grad and make them do a coding test. Something is broken there.
A degree could include the vocational qualification as a one-year study, but having the vocational qualification alone would save youngsters a lot of money and reduce the burden on hiring. You could even still ask coding questions in interviews, but the application process could remove the spam/AI bullshit to some extent. "Can they code?" is already answered.
More explicitly. In 2006, Apple asked Intel to make a SoC for their upcoming product... the iPhone.
At the time, Intel was one of the leading ARM SoC providers; their custom XScale ARM cores were faster than anything from ARM themselves. It was the perfect line of chips for smartphones.
The MBA types at Intel ran some sales projections and decided that such a chip wasn't likely to be profitable. There was apparently debate within Intel: the engineering types wanted to develop the product line anyway, and others wanted to win goodwill from Apple. But the MBA types won. Not only did they reject Apple's request for an iPhone SoC, they immediately sold off their entire XScale division to Marvell (who did nothing with it), so they wouldn't even be able to change their minds later if they wanted to.
With hindsight, I think we can safely say Intel's projections for iPhone sales were very wrong. They would have easily made their money back on sales from the first-gen iPhone alone, and Apple would probably have gone back to Intel for at least a few more generations. Even if Apple had dumped them, Intel would have had a great product to sell into the rapidly growing market of Android smartphones in the early 2010s.
-----------
But I think it's actually far worse than just Intel missing out on the mobile market.
In 2008, Apple acquired P.A. Semi and started work on their own custom ARM processors (and ARM SoCs). The ARM processors which Apple eventually used to replace Intel as supplier in laptops and desktops too.
Maybe Apple would have gone down that path anyway, but I really suspect Intel's reluctance to produce the chips Apple wanted (especially the iPhone chip) was a huge motivating factor driving Apple to develop their own CPUs.
Remember, this is 2006. Apple had only just switched to Intel in January, because IBM had continually failed to deliver the laptop-class PowerPC chips Apple needed [1]. And while Intel had a good roadmap for laptop-class chips at the time, it would have looked to Apple as if history was at risk of repeating itself, especially as they moved into the mobile market, where low power consumption was even more important.
[1] TBH, IBM were failing to provide desktop-class CPUs too, but the laptop CPUs were the more pressing issue. Fun fact: IBM actually tried to sell the PowerPC core they were developing for the Xbox 360 and PS3 to Apple as a low-power laptop core. It was sold to Microsoft/Sony as a low-power core too, but if you look at the launch versions of both consoles, they run extremely hot, even when paired with comically large (for the era) cooling solutions.
> More explicitly. In 2006, Apple asked Intel to make a SoC for their upcoming product... the iPhone.
This isn't strictly true. Tony Fadell - the creator of the iPod and considered a co-creator of the iPhone - said in an interview with Ben Thompson (Stratechery) that Intel was never seriously in the running for iPhone chips.
Jobs wanted it. But the technical people at Apple pushed back.
Besides, by 2006, less than a year before the iPhone was introduced, the chip decisions had already been made.
Was it really? x86 is performance-oriented, not efficiency-oriented. Its variable-length instruction encoding alone makes it really hard to build a low-power CPU that isn't too slow.
I think the impact of the ISA is way overblown. The instruction decode pipeline is worse, but in the end it doesn't consume that many transistors relative to the total size of the system. I think it has much more to do with Intel's attitude of defining the x86 market as desktops and servers and not focusing on super low power parts; plus their monopoly, which led to a long stagnation because they didn't have to innovate as much.
You can see today that modern Ryzen laptop chips aren't that much worse on perf/watt than ARM chips fabbed on the same node.
Innovate on what though? There was no market for performant very low power chips before the iPhone and then Android took off.
I am sure if IBM had more of a market than the minuscule Mac market for laptop class PPC chips back in 2005, they could have poured money into making that work.
Even today, I doubt it would be worth Apple's money to design and manufacture its own M-class desktop chips for just around 25 million Macs + iPads if they weren't reusing a lot of the R&D.
In the 2010s, Intel pretty much sold the same Haswell design for more than half a decade and lipsticked the pig. It is not just low power that they missed: they had time to improve performance/watt for server use, add core counts, do big-little, improve the iGPU, etc.
They just sat on it; their marketing dept made fancy boxes for high-end CPUs and their HR department innovated DEI strategies.
Yes, I'm sure Intel fell behind because a for-profit company was more concerned with hiring minorities than with hiring the best employees it could find.
It's amazing that the "take responsibility", "pull yourself up by your bootstraps" crowd has now become the "we can't get ahead because of minorities" crowd.
Huh, it's not clear what you are suggesting. Who's "we" and who's not taking responsibility?
The best people were clearly not staying at Intel and they have been winning hard at AMD, Tesla, NVIDIA, Apple, Qualcomm, and TSMC, in case you have not been paying attention. They could not stop winning and getting ahead in the past 5-10 years, in fact. So much semiconductor innovation happened.
Yes, if you start promoting the wrong people, the best ones leave very quickly. No one likes reporting to a stupid peer who just got promoted, or to an idiot hired from outside when there are more qualified people who could be promoted from within.
--
And re marketing boxes, just check out where Intel chose to innovate:
The problem with Intel wasn't the technical people. It started with the board: laying off people, borrowing money to pay dividends to investors, bad strategy, not building relationships with customers (who didn't want to work with them for fabs), etc., and then firing the CEO who had a strategy they knew was going to take years to implement.
It wasn't because of "DEI" initiatives and a refusal to hire white people.
For applications where the performance is determined by array operations, which can leverage AVX-512 instructions, an AMD Zen 5 core has better performance per area and per power than any ARM-based core, with the possible exception of the Fujitsu custom cores.
The Apple cores themselves do not have great performance for array operations, but when considering the CPU cores together with the shared SME/AMX accelerator, the aggregate might have good performance per area and per power consumption. That cannot be known with certainty, because Apple does not provide information usable for comparison purposes.
The comparison is easy only with the cores designed by Arm Holdings. For array operations, the best performance among the Arm-designed cores is obtained by Cortex-X4, a.k.a. Neoverse V3. Cortex-A720 and Cortex-A725 have half the number of SIMD pipelines but more than half the area, while Cortex-X925 has only 50% more SIMD pipelines but double the area. Intel's Skymont, a.k.a. Darkmont, has the same area and the same number of SIMD pipelines as Cortex-X4, so like Cortex-X4 it is also more efficient than the much bigger Lion Cove core, which is faster on average for non-optimized programs but has the same maximum throughput for optimized programs.
When compared with Cortex-X4/Neoverse V3, a Zen 5 compact core has a throughput for array operations that can be up to double, while its area is less than double the area of an Arm Cortex-X4. A high-clock-frequency Zen 5 core has more than double the area of a Cortex-X4, but due to the high clock frequency it still has better performance per area, even if, unlike the Zen 5 compact cores, it no longer also has better performance per power consumption.
So the ISA advantage of AArch64, which results in a simpler and smaller CPU core frontend, is not enough to ensure better performance per area and per power consumption when the backend, i.e. the execution units, does not itself have good enough performance per area and per power consumption.
The area of an Arm Cortex-X4, and of the very similar Intel Skymont core, is about 1.7 square mm in a "3 nm" TSMC process (both including 1 MB of L2 cache memory). The area of a Zen 5 compact core in a "4 nm" TSMC process (with 1 MB of L2) is about 3 square mm (in Strix Point). The area of a Zen 5 compact core with full SIMD pipelines must be greater, but not by much, perhaps by 10%, and if it were made in the same "3 nm" process as Cortex-X4 and Skymont, the area would shrink, perhaps by 20% to 25% (depending on the fraction of the area occupied by SRAM). In any case there is little doubt that, in the same fabrication process, the area of a Zen 5 compact core with full 512-bit SIMD pipelines would be less than 3.4 square mm (= double Cortex-X4), leading to better performance per area and per power consumption than either Cortex-X4 or Skymont (this considers only the maximum throughput for optimized programs, but for non-optimized programs the advantage could be even greater for Zen 5, which has a higher IPC on average).
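To make the back-of-envelope math explicit (a sketch using the rough numbers above; the 10% and 20-25% factors are the estimates from this comment, not measured values):

    # Rough numbers from above; all approximate.
    zen5c_4nm = 3.0                # square mm: Zen 5 compact + 1 MB L2, "4 nm"
    full_simd = zen5c_4nm * 1.10   # ~10% larger with full 512-bit SIMD pipes
    low = full_simd * 0.75         # 25% shrink on "3 nm" -> ~2.5 square mm
    high = full_simd * 0.80        # 20% shrink on "3 nm" -> ~2.6 square mm
    print(low, high)
    # Both come out well under 3.4 square mm (= 2 x Cortex-X4 at 1.7 each),
    # so up-to-double throughput in less-than-double area means better
    # perf/area even under the pessimistic end of these assumptions.
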
Cores like Arm Cortex-X4/Neoverse V3 (also Intel Skymont/Darkmont) are optimal from the POV of performance per area and power consumption only for applications that are dominated by irregular integer and pointer operations, which cannot be accelerated using array operations (e.g. for the compilation of software projects). Until now, with the exception of the Fujitsu custom cores, which are inaccessible for most computer users, no Arm-based CPU core has been suitable for scientific/technical computing, because none has had enough performance per area and per power consumption, when performing array operations. For a given socket, both the total die area inside the package and the total power consumption are limited, so the performance per area and per power consumption of a CPU core determines the performance per socket that can be achieved.
Only up to a point. If one abuses it, expect to get locked out. I buy enough stuff from Amazon that they don't mind me returning something once in a while.