kmike84's comments | Hacker News

> whereas the best engines average 99.something%?

To compute accuracy, you compare the moves made during the game with the best moves suggested by the engine. So the engine will evaluate itself at 100%, provided its settings are the same during the game and during evaluation.

You get 99.9something% when you evaluate one strong engine using another strong engine (they're mostly aligned, but may disagree in small details), or when the engine configuration during evaluation is different from the configuration used in the game (e.g. the engine is given more time to think).
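
A minimal sketch of the idea, treating accuracy simply as the share of moves that match the engine's top choice (real sites use more elaborate formulas; the function and move list below are made up):

    def accuracy(played_moves, engine_best_moves):
        # Fraction of game moves that coincide with the engine's first choice.
        matches = sum(1 for played, best in zip(played_moves, engine_best_moves)
                      if played == best)
        return matches / len(played_moves)

    game = ["e2e4", "e7e5", "g1f3", "b8c6"]
    # An engine evaluated with the same settings it played with reproduces
    # its own choices, so it scores 100% against itself:
    print(accuracy(game, game))  # 1.0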


I think he/she is reacting mostly to this quote from the article, not to the article's main topic:

> I have a good answer: my job is to double our value-add capacity over the next three years. Essentially, to double our output without increasing spending.

> You know what? With my XP plans and the XP coaches I’ve hired, it’s totally doable. I think I’m being kind of conservative, actually.

TBH, this part felt off to me as well.


URL parsing in httpx follows RFC 3986, which is not the same as the WHATWG URL Living Standard.

An RFC 3986 parser may reject URLs which browsers accept, or handle them differently. The WHATWG URL Living Standard tries to put real browser behavior on paper, so it's a much better standard to follow if you need to parse URLs extracted from real-world web pages.
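
Here's a sketch of one well-known difference, using only the standard library: for http(s) URLs, browsers (and WHATWG parsers) treat a backslash like a slash, while an RFC 3986-style parser does not (the example URL is made up):

    from urllib.parse import urlsplit

    url = "http://example.org\\foo/bar"  # i.e. http://example.org\foo/bar

    parts = urlsplit(url)
    print(parts.netloc)  # example.org\foo  (the backslash stays in the authority part)
    print(parts.path)    # /bar

    # A WHATWG-compliant parser (and a browser) would instead give
    # host "example.org" and path "/foo/bar" for the same input.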


A great initiative!

We need a better URL parser in Scrapy, for similar reasons. Speed and WHATWG compliance (i.e. doing the same thing as web browsers) are the main requirements.

It's possible to get closer to WHATWG behavior by using urllib plus some hacks. That's what https://github.com/scrapy/w3lib (which Scrapy currently uses) does, but it's still not quite compliant.
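
As a rough illustration of what those hacks do (a sketch, assuming w3lib's safe_url_string behaves as documented; the example URL is made up): it percent-encodes characters that urllib alone would pass through unchanged, which is closer to what a browser puts on the wire:

    from w3lib.url import safe_url_string

    # A URL as it might appear in a page's href attribute:
    raw = "https://example.org/some path/índice.html"

    print(safe_url_string(raw))
    # expected something like: https://example.org/some%20path/%C3%ADndice.html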

Also, surprisingly, on some crawls URL parsing can take an amount of CPU time similar to HTML parsing.

Ada / can_ada look very promising!


can_ada dev here. Scrapy is a fantastic project, we used it extensively at 360pi (now Numerator), making trillions of requests. Let me know if I can help :)


Exit Through the Gift Shop - an amusing documentary about somebody trying to find Banksy (a street artist), and much more, supposedly directed by Banksy himself.

There is some debate about whether it is a documentary or not (the story is almost too good), but the evidence seems to suggest it is real.

EDIT: sorry, I missed the "last 4 years" part in the question. This film is older than that.


I still think about the final line from Banksy often, paraphrasing here:

“I used to encourage everyone to make art. I don’t do that much anymore.”


Definitely this; my subconscious can't let go of the question of whether this documentary is an exquisitely elaborate hoax or a rare capture of "truth is stranger than fiction".

And if the goal was to create that confusion... meta-wow.


A masterpiece, and one of my top 3 films about art, along with “Achilles and the Tortoise” and “Vincent and Theo”.


The advice to use lru_cache is good.

But there is an issue if lru_cache is used on methods, like in the example given in the article:

1. When lru_cache is used on a method, `self` is used as part of the cache key. That's good, because there is a single cache for all instances, and using `self` as part of the key prevents data from being shared between instances (which would be incorrect in most cases).

2. But because `self` is part of the key, a reference to `self` is stored in the cache.

3. While there is a reference to a Python object, it can't be deallocated. So an instance can't be deallocated until the cache is deallocated (or the entry is evicted), once an lru_cache'd method has been called on it at least once.

4. The cache itself is never deallocated (well, at least until the class is destroyed, probably at Python shutdown). So instances are kept in memory unless the cache goes over its size limit and all entries for the instance are evicted.
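
A minimal, self-contained sketch of points 1-4 (the class and method names are made up for illustration):

    import functools
    import gc
    import weakref

    class Widget:
        @functools.lru_cache(maxsize=128)
        def expensive(self, x):
            return x * 2

    w = Widget()
    w.expensive(1)        # the cache key includes `w`, so the cache now references it
    ref = weakref.ref(w)

    del w
    gc.collect()
    print(ref())          # still alive: the class-level cache keeps the instance in memory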

I think there is a similar problem in the project's source code as well, e.g. https://github.com/Textualize/textual/blob/4d94df81e44b27fff... - a DirectoryTree instance won't be deallocated after its render_tree_label method has been called, at least until new cache records push out all the references to that particular instance.

This may or may not matter, depending on the situation, but it's good to be aware of the caveat. Unfortunately, lru_cache is not a good fit for methods.
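
One common workaround (not from the article, just a sketch) is to create the cached callable per instance, so the cache is owned by the instance and is deallocated with it; the resulting reference cycle is collectable by the garbage collector:

    import functools

    class Node:
        def __init__(self):
            # Per-instance cache: it lives and dies with the instance.
            self.render_label = functools.lru_cache(maxsize=128)(self._render_label)

        def _render_label(self, name):
            return f"[dir] {name}"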


This is the kind of thing that makes perfect sense but is not at all obvious to most people using Python, and IMO should be a caveat in the official manual. Great explanation.


Does @staticmethod run into this same issue?


No.


Not sure about the autofocus advice; I'm pretty happy with manual focus. It requires a static camera placement and a fixed distance to the person, but isn't that the case anyway? Are people really walking around the room, or moving the camera between calls?

Manual focus means there are fewer failure modes: slow autofocus, autofocus trying to refocus, focusing on the wrong thing, etc.

It also means the hardware can be cheaper: the camera doesn't need to have good autofocus (some old DSLR is fine), and you can also use manual lenses.


Hm, I haven't noticed any increased latency when using a DSLR as a webcam.


As I understand it, the drivers (the webcam utility? not sure) are built for x86. For some reason they don't work in apps built for M1, so the camera only works if the app which needs the video is running in emulation mode.

So, if you want to use a Canon DSLR on M1 in a web browser (e.g. Google Meet), get a browser built for x86.

I'm using Chromium, which can be downloaded for x86. The issue is that Chromium doesn't have the screen-share feature. So, for screen sharing, I'm using Chrome and joining the call a second time, in "companion mode". That's 2 separate browsers to participate in one call. Maybe there is a way to get Chrome or Firefox builds for x86, but I was a bit too lazy when setting it up :)


Haha thank you! That’s certainly an interesting approach.


Is it such a big issue? My Canon DSLR turns off every 30 minutes, but only for a couple of seconds, and then it turns back on. On the positive side, it's now easy to notice when a 30-minute or 1-hour meeting is running over; it's a nice reminder :)

