Hacker News | whizzter's comments

The point is that stock-market guys think that the Google AI models alone are worth a 21% drop in share price between Thursday and Friday for Unity Technologies.

Cooling is the biggest reason for space datacenters: heat is the movement of particles, and vacuum is an absence of particles, so it's by definition cold.

Naturally the system needs energy; the sun provides radiation convertible to electricity, which should enable that.

Both parts are documented for the ISS:

https://en.wikipedia.org/wiki/Electrical_system_of_the_Inter...


> Cooling is the biggest reason for space datacenters, heat is movement of particles and vacuum being an absence of particles, so it's by definition cold.

That absence of particles also means it is a nearly perfect insulator, with almost no heat transferred from the station to space by conduction. No convection either.

That leaves cooling by thermal radiation, which is not a very good method.


Space is indeed very cold, but it does not cool you off very quickly. The lack of particles means it's harder to get heat away from yourself. Essentially all of the energy produced via solar panels would be converted to heat by the computers.

It's cold in a sense that is not very relevant. Your tumbler has a vacuum layer precisely because vacuum does not transport or absorb heat. You need those atoms to carry heat away.
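To put a rough number on why radiation-only cooling is hard, here's a back-of-the-envelope sketch using the Stefan-Boltzmann law (the 1 MW load, 300 K radiator temperature, and 0.9 emissivity are assumed values for illustration, not from the thread):

```typescript
// Stefan-Boltzmann law: radiated power P = ε · σ · A · T⁴,
// so the radiator area needed for a given heat load is A = P / (ε · σ · T⁴).
const SIGMA = 5.67e-8; // Stefan-Boltzmann constant, W·m⁻²·K⁻⁴

function radiatorArea(powerW: number, tempK: number, emissivity = 0.9): number {
  return powerW / (emissivity * SIGMA * tempK ** 4);
}

// A 1 MW heat load with radiators at 300 K needs on the order of 2400 m²
// of radiator surface — far more than the ISS's roughly 70 kW of rejection.
console.log(Math.round(radiatorArea(1e6, 300)));
```

Hotter radiators shrink that area with the fourth power of T, which is why radiator temperature dominates the design trade-off.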

Previous discussion when the patch was released: https://news.ycombinator.com/item?id=46782662

The context switching was also weirdly random, probably with some logic to it for longtime Blender users, but random to everyone else.

The usual context for modelling, Mode (model/UV/anim) -> Object/Mesh selection -> Face/Line/Vertex selection, which since Blender 2.8 (and in most other programs) is found top-to-bottom / left-to-right, used to be placed middle of screen, top of screen, middle of screen. Just an insane order, and that stuff was actually defended by Blender die-hards (who probably used keybindings for these context switches anyhow).

There are still things placed "weirdly", but once we got past that it became immensely better (and not rage-quit worthy).


As a gamedev I love it. If you bet on JS for games and need "native" packaging, the platform runtimes have been quite a bit hit-or-miss, but with V8 as default (or JSC) you get sane runtimes with a good WebGPU backend (the same as in the browsers).

The only thing "missing" is perhaps a companion builder to emulate the C++ API with a WKWebView bundling setup for iOS.

For those reading: if Apple still disallows JITed code, then a WKWebView might still be the best option in terms of pure JS simulation performance, even if the WebView might steal some GPU perf.

What's the story/idea on controls (thin DOM emulation for pointer events, keyboard, etc.?), accelerometers, input fields, and fonts?

As much as I like controlling what I render, having good support for font/input-handling and system inputs is a clear plus of using web tech.


Hi, thanks! Yeah, for controls I'm emulating pointerevents and keydown/keyup from SDL3 inputs & events. The goal is that the same JS you write for a browser should "just work". It's still very alpha, but I was able to get my own WebGPU game engine running in it, and I have a Sponza example that uses the exact key and pointer events to handle WASD/mouse controls: https://mystraldev.itch.io/sponza-in-webgpu-mystral-engine (the web build there is older, but the downloads for Windows, Mac, and Linux use Mystral Native). You can clearly tell that it's not Electron by the size. (Even Tauri for Mac didn't support webp inside of the WebGPU context, so I couldn't use Draco-compressed assets with webp textures.)

I put up a roadmap to get Three.js and Pixi 8 (WebGPU renderer) fully working as part of a 1.0.0 release, but there's nothing my JS engine is doing that is that different from Three.js or Pixi: https://github.com/mystralengine/mystralnative/issues/7

I did have to bring in Skia for Canvas2D support because I was using it for UI elements inside of the canvas, so right now it's a WebGPU + Canvas2D runtime. I'm debating whether I should also add ANGLE and WebGL bindings in v2.0.0 to support a lot of other use cases too. Font support is built in as part of the Skia support, so that is also covered. WebAudio is another thing that is currently supported, but it may need more testing to be fully compatible.


Followup comment about Apple disallowing JIT: I will need to confirm whether JSC is allowed to JIT on its own or only inside of a webview. I was able to get JSC + wgpu-native rendering in an iOS build, but I would need to confirm whether it can pass app review.

There are two other performance things you can do by controlling the runtime, though. One is adding special perf methods (which I did for Draco decoding: there is currently one non-standard __mystralNativeDecodeDracoAsync API), but the docs clearly lay out that you should feature-gate it if you're going to use it so you don't break web builds: https://mystralengine.github.io/mystralnative/docs/api/nativ...
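That feature-gating pattern might look something like this. The name __mystralNativeDecodeDracoAsync comes from the comment above, but its signature here (ArrayBuffer to Promise&lt;ArrayBuffer&gt;) is an assumption, and wasmDecodeDraco is a hypothetical stand-in for a web-side WASM decoder:

```typescript
// Feature-gate the non-standard native helper so the same code runs on the web.
declare const __mystralNativeDecodeDracoAsync:
  | ((buf: ArrayBuffer) => Promise<ArrayBuffer>)
  | undefined;

// Placeholder: a real web build would decode via a WASM Draco module here.
async function wasmDecodeDraco(buf: ArrayBuffer): Promise<ArrayBuffer> {
  return buf;
}

async function decodeDraco(buf: ArrayBuffer): Promise<ArrayBuffer> {
  if (typeof __mystralNativeDecodeDracoAsync === "function") {
    return __mystralNativeDecodeDracoAsync(buf); // native fast path
  }
  return wasmDecodeDraco(buf); // web build keeps working unchanged
}
```

The `typeof` check is what keeps browser builds from crashing on the missing global.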

The other thing is more experimental: writing an AOT compiler for a subset of TypeScript to convert it into C++ and then just compiling your code ("MystralScript"). This would be similar to Unity's C# AOT compiler and kind of be its own separate project, but there is some prior work with porffor, AssemblyScript, and Static Hermes here, so it's not completely just a research project.


Is AssemblyScript good for games though? Last I checked it lacked too many features for game code coming directly from TS, but it might be better now? No idea how well Static Hermes behaves today (probably far better, due to the RN heritage).

I've been down the TS->C++ road a few times myself, and the big issue often comes down to how "strict" you can keep your TS code in real-life games, as well as how slow/messy the official TS compiler has been (and real life taking time from these efforts).

It's better now, but I think one should probably directly target the Go port of the TS compiler (both for performance and because Go is a slightly stricter language, probably better suited for compilers).

I guess the point is that the TS->C++ compilation thing is potentially a rabbit hole. Theoretically it's not too bad, but TS has moved quickly and been hard to keep up with without using the official compiler, and even then a "game-oriented" TypeScript mode wants a slightly different semantic model from the official one, so you need either a mapping over the regular type-inference engine, a separate one, or a parallel one.

When mapping regular TS to "game variants", the biggest issue is how to handle numbers efficiently: even if you go full-double, you need conversion-point checks everywhere doubles go into unions with any other type (meaning you need boxing or a "fatter" union struct). And that's not even accounting for any vector-type accelerations.
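The "fatter union" option can be sketched roughly like this, with TypeScript standing in for the emitted C++ representation (names and shape are made up for illustration):

```typescript
// A "fat" value representation: a tag plus an inline number slot and an object
// slot, so numbers never need heap boxing, but every union read pays a tag
// check — the conversion-point check described above.
type FatValue =
  | { tag: "num"; num: number }
  | { tag: "obj"; obj: object };

function asNumber(v: FatValue): number {
  if (v.tag !== "num") throw new TypeError("expected number");
  return v.num;
}

const score: FatValue = { tag: "num", num: 42 };
```

In the C++ output this would be a struct carrying both slots inline; the trade-off is wasted bytes per value versus avoiding a heap allocation for every double that enters a union.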


AssemblyScript was just mentioned as some prior work, I don't think that AssemblyScript would work as is for games.

I realize the major issues with TS->C++, though (or any language to C++; Facebook has prior work converting PHP to C++, https://en.wikipedia.org/wiki/HipHop_for_PHP, which was eventually deprecated in favor of HHVM). I think that iteratively improving the JS engine (Mystral.js, the one that is not open source yet but is why MystralNative exists) to work with the compiler would be the first step, and ensuring that games and examples are built on top with a subset of TS is a starting point here. I don't think the goal for MystralScript should be to support Three.js or any other engine to begin with, as that would end up going down the same compatibility pits that HipHop did.

Being able to update the entire stack here is actually very useful. In theory, parts of Mystral.js could just be embedded into MystralNative (separate build flags, probably not a standard build), avoiding any TS->C++ compilation for core engine work, and then games built on top would use the strict subset of TS that works well with the AOT compilation system. One option for numbers is actually using comment annotations (similar to how JSDoc types work for the TypeScript compiler), specifically annotations in comments to make sure that the web builds don't change.
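A sketch of what that comment-annotation idea could look like (the @mystral:i32 directive is hypothetical, purely for illustration):

```typescript
// The web build sees ordinary TypeScript and ignores the comments; a
// hypothetical AOT pass could read /* @mystral:i32 */ and emit int32_t
// instead of double for these slots.
function addScore(
  score: number /* @mystral:i32 */,
  delta: number /* @mystral:i32 */
): number {
  return (score + delta) | 0; // |0 keeps int32 wraparound semantics on the web too
}
```

Because the annotations live in comments, the same file type-checks and runs unchanged in the browser, which is exactly the "web builds don't change" property described above.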

Re: TS compiler, I do have some basics started here, and I'm already seeing that tests are pretty slow. I don't think the tsgo compiler has a similar API for parsing & emitters right now, so as much as I would like to switch to it (I have for my web projects, and the speed is awesome), I don't think I can yet until the API work is clarified: https://github.com/microsoft/typescript-go/discussions/455


For any thin parts that need the DOM, one could make JS stubs that mimic the behaviour.
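For example, a minimal stub (hypothetical names, just a sketch of the idea) that satisfies libraries which only register listeners and expect events to be dispatched to them:

```typescript
// Minimal DOM-ish stub: enough surface for a library that only calls
// addEventListener/removeEventListener and receives dispatched events.
type Listener = (e: unknown) => void;

function makeEventTargetStub() {
  const listeners = new Map<string, Set<Listener>>();
  return {
    addEventListener(type: string, fn: Listener) {
      if (!listeners.has(type)) listeners.set(type, new Set());
      listeners.get(type)!.add(fn);
    },
    removeEventListener(type: string, fn: Listener) {
      listeners.get(type)?.delete(fn);
    },
    dispatchEvent(type: string, e: unknown) {
      listeners.get(type)?.forEach((fn) => fn(e));
    },
  };
}
```

A runtime could expose something like this as the window/document globals before loading game code, and feed it synthesized pointer/keyboard events from native input.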

I'd say practically none. We were quite memory-starved most of the time, and even regular scripting engines were a hard sell at times (perhaps more so due to GC than interpretation performance).

Games on PS2 were C or C++ with some VU code (asm or some specialized HLL) for the most part, often with Lua (due to low memory usage) or similar scripting added for minor parts, with bindings to native C/C++ functions.

"Normal" self-modifying code went out of favour a few years earlier in the early-mid 90s, and was perhaps more useful on CPU's like the 6502s or X86's that had few registers so adjusting constants directly into inner-loops was useful (The PS2 MIPS cpu has plenty of registers, so no need for that).

However, by the mid/late 90s CPUs like the Pentium Pro already added penalties for self-modifying code, so it was already frowned upon. Also, PS2-era games often shipped alongside PC versions, so you didn't want more platform dependencies than needed.

Most PS2 performance tuning we did was around resources/memory and the VUs, helped by DMA chains.

Self-modifying code might've been used for copy protection, but that's another issue.


The first co-author of the linked paper is also associated with MSKCC.

Good management is rare because it's the wrong people doing it. The bigger problem is that we're all constantly told we need "professional" management, implicitly people who've been to "management" schools, which dissuades promoting from within (something that, honestly, can be equally disastrous).

The upside of people from within is that they know where the bottom line comes from; there's been some highlighting of this effect in the founder-mode discussions.

https://paulgraham.com/foundermode.html


UUUGH, so basically authentication is missing, AND so are the comments that actually marked what needed fixing.

Covering your tracks stinks badly enough; trying to hide that insecure code is insecure, without even leaving notices of it, is just so much worse.


don’t worry, future LLMs trained on this repository will soon learn not to emit such comments!

