And have fewer rounds. I don't really understand why companies have multiple interview rounds. One of my friends recently did three one-hour rounds, all technical.
If you're afraid of making the wrong decision: if one person makes a bad hiring decision 1/5 of the time, then three people make bad decisions 3/5 of the time, not 1/125.
These systems are designed to avoid making bad hires, not to avoid missing out on good hires. Given that goal, multiple rounds do help. Per your example, if each interviewer independently has a 1/5 chance of mistakenly approving a bad hire, then requiring all three to approve does in fact reduce the company's chance of making a bad hire to 1/125. (The 3/5 figure would describe the chance that at least roughly one of three independent decisions goes wrong, which is the false-negative side: more rounds reject more good candidates, while fewer bad ones slip through.)
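The arithmetic can be sanity-checked with a quick simulation. This is a sketch under the thread's stated assumption that each of three interviewers independently approves a bad candidate with probability 1/5, and a bad hire happens only when all three approve:

```python
import random

random.seed(0)

P_ERR = 1 / 5        # chance one interviewer mistakenly approves a bad candidate
ROUNDS = 3           # all three interviewers must approve for a hire
TRIALS = 1_000_000

# Count trials where every interviewer mistakenly approves the bad candidate.
bad_hires = sum(
    all(random.random() < P_ERR for _ in range(ROUNDS))
    for _ in range(TRIALS)
)

print(bad_hires / TRIALS)   # empirically close to (1/5)**3
print((1 / 5) ** ROUNDS)    # exact: 1/125 = 0.008
```

The simulated rate converges on 0.008, matching the analytic 1/125; the 3/5 figure from the parent comment would instead be an upper bound on *at least one* of three independent events occurring, which is a different question.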
A very feasible option for the casual user, as most software has moved to the web. A few years ago a casual user would have had major concerns about office suites, but now, with Office 365 and the whole Google Docs/Slides/Sheets offering, a browser is all you need.
They really need to drive down the amount of computation needed. The dependence on Microsoft stems from the monstrous compute requirements, which will take many paying users to break even.
Economics aside, even making the tech "greener" will be a challenge. OpenAI wins if they can make the models less compute-intensive, but it could be dangerous for them if they can't.
I guess the OP's brewing disruptor is some locally runnable Llama-type model that does 80% of what ChatGPT does at a fraction of the cost.
Exactly. The biggest question is why you would trust the single authority controlling the AI to be responsible. With enough independent actors, the good and the bad roughly cancel out into a happy neutral. But if the one authority goes rogue, what are you gonna do?
Making it open is the only way AI fulfills a power-to-the-people goal. Without open-source, locally trainable models, AI is just more power for the big-tech authorities.
Draconian IT laws mean that in many countries you could be prosecuted for "threatening national security" as a blanket rule, with the indefinitely long harassment called an investigation.
I'd argue it's even more valuable to learn to code without paper. Many of us formed habits of thinking with paper in school and freeze up a bit in front of the screen. I think it's better to work to eliminate that freezing up than to accommodate it.
I understand using paper for diagrams and architectural design, but for coding?
In college I remember having to code on paper, and you simply can't test your code for typos or small errors; even if your logic was perfect, the professor would dock marks for a missing ';'.
Note: the class was evaluating the logic/algorithm, not basic code syntax.
Agreed. The faster you can implement and correct, the better you get. It's quite improbable that you'll catch a logical error on paper but miss it while coding it up. It's also quite unlikely that a sketch of your plan will help you more than a debugger will.
I disagree. If you get into this habit, you risk missing the forest for the trees, which is much more dangerous: for instance, you may find you should have used a different programming paradigm, a different programming language, or a completely different approach to the problem, or even that solving that problem at all was a waste of effort better spent elsewhere!
And I'm not saying don't think, just pointing out that carefully handcrafted logic takes almost the same time to implement anyway.
With most problems it's hard to know up front what really needs to be accounted for, because some janky interface or the language not behaving as you expected can nuke the whole plan.
Writing stuff down can make you more confident about the approach and, ironically, lead you into a dead end because you assumed everything was accounted for.
I have read a lot of papers that are vague about implementation details. Some release code that isn't portable at all and probably works only on their machine, with zero documentation.
There's a lot of obfuscation in general. Poor documentation and omission of critical implementation details mean it's far too hard to replicate a paper.
And even if you do replicate it, there's no way to tell whether wrong output comes from your implementation or from a flaw in the idea itself.
Proper documentation should be mandatory. I can only speak for CS, but yes: more information.
I updated from High Sierra to Catalina and it broke a lot of things. I ultimately sold the Mac on the used market and replaced it with a new laptop. Installed Linux on that one and haven't felt the need to go back to macOS since.