A few months ago I switched to GNOME since it supports HiDPI and fractional scaling. It's snappy enough, but somehow feels bloated in everyday use (Ubuntu 20.04).
I tried to switch back to Xfce with a recent fix for fractional scaling, but it looked fuzzy on my screen no matter the resolution, so I had to abandon that for the time being.
KDE was the default at my work and I've used it for many years, probably over a decade.
The article says GNOME is lighter, but that's not my impression. Or maybe I'm missing something with GNOME?
Hehe, well, frankly the English language could rather be used as a suggestion for combining multiple languages into a hodge-podge of sorts. It did lead us to this state of affairs, after all. How about starting with pronouncing letters as they're written, and no more funny interpretations and pronunciations?
Someone reading that might want to ask the same question, hoping to hear salacious "hookers and blow" Wolf of Wall Street legendary stories, but usually the reality is pretty mundane. Corruption is endemic to highly successful niche adaptations, but its yields rarely give rise to outlandish behavior. We see that a lot in breathless news stories, but that's just what sells ads.
Most corruption happens in a matter-of-fact, business-as-usual, we're-all-in-this-together atmosphere. And the proceeds don't usually lead to blowout lifestyle extravaganzas, as most participants are clever enough to realize that draws unwanted scrutiny. That is what makes corruption so insidious and difficult to root out: the rot spreads slowly, and an investigation is commonly far more difficult and costly to mount than the corrupt act itself.
The most reliable way of rooting out corruption I've seen is an organizational culture that nurtures trust between leadership and members, shares gains equitably (so members feel they have skin in the game), and fiercely protects whistleblowers (where the vast majority of human organizations utterly fail). Dunbar's number appears to me to be some kind of (hopefully) local optimum, but I'm curious how Geoffrey West's scaling findings square with the incidence and scale of corruption.
> use containers/virtualization for anything where one's hacking around might be dodgy - i.e. keep the work part of workstation in mind with all system updates/installations, etc.
Care to elaborate? This might be useful to try. I have a similar setup (a MacBook and an Ubuntu system), but I find that the 18.04/20.04 LTS versions often need reboots, which I didn't have with CentOS. Still, I would probably continue using Ubuntu because it usually needs less hacking time, in my experience.
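My guess at what's meant (happy to be corrected): run the dodgy installs and experiments in a throwaway container so the host stays clean. A minimal sketch, assuming Docker and its Python SDK (pip install docker); the image and command are just placeholders:

    import docker  # assumes a local Docker daemon is running

    client = docker.from_env()

    # Run a risky install inside a disposable Ubuntu container instead of
    # on the host; remove=True throws the container away afterwards.
    logs = client.containers.run(
        "ubuntu:20.04",  # placeholder image
        ["bash", "-c", "apt-get update && apt-get install -y cowsay"],  # placeholder command
        remove=True,
    )
    print(logs.decode())

The same idea works with plain docker run, LXD, or a full VM; the point is that system-level experiments never touch the host install.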
> Still, the sheer scale of the problem is daunting. “Any reasonable search engine has to have 20 billion-50 billion pages in its active index,” Mr Ramaswamy said. When a user runs a query, the retrieval system must sift through vast troves of data then rank them in milliseconds.
This sounds interesting, but as an outsider to these topics I have many questions:
- How is the search achieved in milliseconds (hardware/software), possibly for millions of simultaneous users across the world?
- What is so difficult about creating an index of 20-50 billion entries? I'd imagine FAANG etc. have the resources to do these things with little effort.
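From some digging, my rough understanding: the core trick is an inverted index, precomputed offline over the whole corpus and sharded across many machines, so a query only walks a few short per-term posting lists instead of 20-50 billion pages; that is where the milliseconds come from. A toy sketch of the data structure (AND-only matching, no ranking, nothing like production code):

    from collections import defaultdict

    # Toy inverted index: maps each term to the set of document ids that
    # contain it. Real engines precompute this over billions of pages
    # and shard it across machines.
    docs = {
        0: "fast search over billions of pages",
        1: "ranking pages in milliseconds",
        2: "billions of users search the web",
    }

    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.split():
            index[term].add(doc_id)

    def search(query):
        """Return doc ids containing every query term (AND semantics)."""
        terms = query.split()
        if not terms:
            return set()
        result = set(index.get(terms[0], set()))
        for term in terms[1:]:
            result &= index.get(term, set())
        return result

    print(search("billions pages"))  # {0}

As far as I can tell, the hard part is not the data structure itself but keeping tens of billions of entries fresh, sharded, and ranked within a strict latency budget.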
I own a JBL Charge 2. These are probably aimed at people who prefer heavy bass over a balanced sound (balanced they are not). Worst of all, the Bluetooth connection is unstable even in direct sight, within 5-20 cm, with Android/Mac/Raspberry Pi. I realize everyone has different tastes, but I would not recommend JBL.
Interesting that you had issues with Bluetooth -- I use mine as my car stereo (the car is too old to have Bluetooth), and I often leave it running with the car shut off; I can walk all the way from my garage to the house and still hear the music chugging along just fine.
As for the bass, I agree it's a bit boomy; I use EQ at the source to compensate.
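In case anyone wants to replicate that in software: one way is a low-shelf cut below a couple hundred Hz. A rough sketch using the standard RBJ Audio EQ Cookbook low-shelf biquad (NumPy/SciPy; the corner frequency and gain are just guesses, tune by ear):

    import numpy as np
    from scipy.signal import lfilter

    def low_shelf(x, fs, f0=200.0, gain_db=-6.0):
        """RBJ Audio EQ Cookbook low-shelf biquad (shelf slope S = 1).

        Negative gain_db gently attenuates everything below roughly f0,
        unlike a hard high-pass.
        """
        A = 10 ** (gain_db / 40)
        w0 = 2 * np.pi * f0 / fs
        alpha = np.sin(w0) / 2 * np.sqrt(2)
        cosw = np.cos(w0)

        b0 = A * ((A + 1) - (A - 1) * cosw + 2 * np.sqrt(A) * alpha)
        b1 = 2 * A * ((A - 1) - (A + 1) * cosw)
        b2 = A * ((A + 1) - (A - 1) * cosw - 2 * np.sqrt(A) * alpha)
        a0 = (A + 1) + (A - 1) * cosw + 2 * np.sqrt(A) * alpha
        a1 = -2 * ((A - 1) + (A + 1) * cosw)
        a2 = (A + 1) + (A - 1) * cosw - 2 * np.sqrt(A) * alpha

        return lfilter([b0 / a0, b1 / a0, b2 / a0],
                       [1.0, a1 / a0, a2 / a0], x)

    # Demo: tame a synthetic 100 Hz "boom" mixed with a 1 kHz tone.
    fs = 44100
    t = np.arange(fs) / fs
    x = np.sin(2 * np.pi * 100 * t) + np.sin(2 * np.pi * 1000 * t)
    y = low_shelf(x, fs)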
FTA: "... any wavelet can be reconstructed by adding together a finite series of identical wavelets squashed to fractions of its initial width: a half, then a fourth, then an eighth and so on."
Maybe I'm not following, or there's a typo above. Shouldn't the first 'wavelet' be replaced with function/signal/distribution/etc.?
This sounds a lot like Taylor or Fourier analysis.
Wavelets are basically a generalisation of Fourier analysis with an arbitrary basis function, which can be selected to highlight/match specific features in the data.
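To make the quoted "half, then a fourth, then an eighth" idea concrete, here's the simplest wavelet (Haar) as a toy NumPy sketch; this is textbook material, not anything specific to the article:

    import numpy as np

    def haar_dwt(x):
        """One level of the Haar discrete wavelet transform: split the
        signal into a coarse approximation (pairwise averages) and
        detail coefficients (pairwise differences) at half resolution."""
        x = np.asarray(x, dtype=float)
        approx = (x[0::2] + x[1::2]) / np.sqrt(2)
        detail = (x[0::2] - x[1::2]) / np.sqrt(2)
        return approx, detail

    def haar_idwt(approx, detail):
        """Invert one level; reconstruction is exact."""
        x = np.empty(2 * len(approx))
        x[0::2] = (approx + detail) / np.sqrt(2)
        x[1::2] = (approx - detail) / np.sqrt(2)
        return x

    # Multiresolution: keep halving the approximation, collecting detail
    # coefficients at 1/2, 1/4, 1/8, ... of the original width.
    x = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
    details, a = [], x
    while len(a) > 1:
        a, d = haar_dwt(a)
        details.append(d)

    # Adding the scales back in, coarsest to finest, rebuilds the signal.
    for d in reversed(details):
        a = haar_idwt(a, d)
    print(np.allclose(a, x))  # True

Each detail array holds features at one scale, which is why wavelets are good at isolating localised features that a global Fourier basis smears out.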
They've been used in image filtering for a long time. I remember a wavelet-based grain removal and denoising plugin for Photoshop from around 15 years ago.
In this work I suspect that trying to implement wavelets in an audio editor - which generates some useful but slightly quirky representations and editing options - prompted the realisation that they could be applied to stochastic PDEs.
PyTorch is increasing its ROCm support. It has had CI for a long time, has been buildable all this year without too many caveats, and I would expect nightlies to be advertised sometime soon; maybe the next stable release will even include it.
Anyone have some experience with this? I'm in the market for an AMD laptop capable of running standard scikit-learn/PyTorch/etc., but these seem optimized for NVIDIA cards. I'm curious about the outlook for these trending AMD cards.
Generally, if you want to do ML tasks you want NVIDIA. They put the work in early to build the tooling, so now the default assumption is that you're on NVIDIA hardware. It is possible to do some stuff on AMD cards, but you'll be on the cutting edge for that platform, re-solving problems that were already solved on the NVIDIA side.
Yes, that was my conclusion after some research on this. There is an open issue on GitHub for AMD support in PyTorch, and it looks like something works on Arch Linux, but it really sounds like support is still in the hacking stage and far from production-ready.
Support for PyTorch on ROCm is fairly good. I have been building it from the dev branch without much trouble all year. There has been ROCm CI for even longer; my patch to print ROCm system information in bug reports was merged last week.
If you know where to look (i.e. it's public but unannounced) you can see that nightly wheels have been built for the last few days.
So I would expect that some time between now and the Developer Day in November we'll see ROCm appear on PyTorch's "get started" page.
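If you want to verify what a given wheel supports: ROCm builds expose the familiar torch.cuda API (it's HIP underneath), so existing CUDA-style code mostly runs unchanged. A quick sanity check, assuming a ROCm build is installed:

    import torch

    print(torch.__version__)
    print(torch.version.hip)          # HIP/ROCm version string; None on CUDA builds
    print(torch.cuda.is_available())  # True if a supported AMD GPU is visible

    if torch.cuda.is_available():
        x = torch.randn(1024, 1024, device="cuda")  # lands on the AMD GPU
        print((x @ x).device)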
I'm almost positive that if you're talking about AMD GPUs, you're going to be out of luck. For deep learning especially, NVIDIA is really the only serious option.