sbierwagen's comments | Hacker News

Note for the confused: Ecombi achieves this by heating the bricks to dramatically higher temperatures using conventional resistive heating elements, thereby storing more energy, even though the specific heat capacity of any ceramic material is dramatically inferior to that of water.

But, as a result, Ecombi has a much lower system efficiency than a heat pump, since it's essentially just a space heater pointed at a rock. It only makes sense in jurisdictions with time-of-day variable electricity pricing, and it trades a higher lifetime cost for simplicity and a low initial purchase price.
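
For a rough sense of scale, here's a back-of-envelope comparison of sensible heat per kilogram; the specific heats and temperature swings are my own illustrative guesses, not Ecombi spec-sheet numbers:

    # Back-of-envelope: sensible heat stored per kg of medium, E = c * dT.
    # Specific heats and temperature swings below are rough assumptions,
    # not Ecombi figures.

    def stored_kj_per_kg(specific_heat_kj_per_kg_k, delta_t_k):
        return specific_heat_kj_per_kg_k * delta_t_k

    # Hot-water tank: ~4.19 kJ/(kg*K), heated from 20 C to 65 C.
    water = stored_kj_per_kg(4.19, 65 - 20)    # ~190 kJ/kg

    # Storage-heater brick: ~0.8 kJ/(kg*K), heated from 20 C to 650 C.
    brick = stored_kj_per_kg(0.8, 650 - 20)    # ~500 kJ/kg

    print(f"water: {water:.0f} kJ/kg, brick: {brick:.0f} kJ/kg")

The brick only wins by running at temperatures a water tank never could, which is the whole point.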


Thanks for the notes! I've seen them at other people's homes, but that's about the extent of my knowledge about them. (And I quickly googled a spec sheet to calculate a kJ/kg value.)

I suppose that efficiency whammy is worth it if you can use it to smooth out the duck curve. If power rates go negative then you'd be a fool not to run a space heater pointing at a rock!

>The drag coefficient was the headline: 12% better than our design target.

Is the drag much better than a regular cubesat? It doesn't look tremendously aerodynamic. From the description I was kind of expecting a design that minimized frontal area.

>Additional surface treatments will improve drag coefficient further.

Is surface drag that much of a contributor at orbital velocity?


Ultimately it's about the ballistic coefficient. You want high mass, low cross-sectional area, and low drag coefficient (Cd). With propulsion for station-keeping, it's challenging to capture the VLEO benefits with a regular cubesat. That said, there are VLEO architectures different than Clarity that make sense for other mission areas.

Yes it's a big contributor. The atmosphere in VLEO behaves as free molecular flow instead of a continuous fluid.
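
To put rough numbers on it, using the B = Cd*A/m convention in m^2/kg, where lower means slower orbital decay; all sizes and Cd values here are illustrative guesses, not Clarity figures:

    # Ballistic coefficient sketch, B = Cd * A / m (m^2/kg); lower B means
    # drag bleeds off orbital energy more slowly.  Numbers are made up for
    # illustration, not figures for Clarity or any real spacecraft.

    def ballistic_coefficient(cd, frontal_area_m2, mass_kg):
        return cd * frontal_area_m2 / mass_kg

    # Generic 6U cubesat flying face-on: ~0.02 m^2 frontal area, 10 kg,
    # Cd ~2.2 (a typical blunt-body value in free molecular flow).
    cubesat = ballistic_coefficient(2.2, 0.02, 10.0)   # ~4.4e-3 m^2/kg

    # Hypothetical slender VLEO body: same mass, half the frontal area,
    # lower Cd from shaping and surface treatment.
    slender = ballistic_coefficient(1.5, 0.01, 10.0)   # ~1.5e-3 m^2/kg

    print(f"{cubesat:.1e} vs {slender:.1e} m^2/kg")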


Cue the ultimate low orbit satellite:

> It is undesirable to have a definition that will change with improving technology, so one might argue that the correct way to define space is to pick the lowest altitude at which any satellite can remain in orbit, and thus the lowest ballistic coefficient possible should be adopted - a ten-meter-diameter solid sphere of pure osmium, perhaps, which would have B of 8×10^−6 m^2/kg and an effective Karman line of z(-4) at the tropopause

from https://arxiv.org/abs/1807.07894


Assuming I did the math right, such a satellite would only run $265 million USD for the materials (launch costs for an object of ~9,800 tonnes left as an exercise for the reader). That's far more affordable than I had expected. Amusing thought.
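
For anyone checking the arithmetic (the density is the standard handbook value for osmium; the rest follows from the quoted B):

    import math

    # Sanity check on the ten-meter osmium sphere from the quote above.
    radius_m = 5.0
    density_kg_m3 = 22_590            # osmium, handbook value (approx.)
    b_m2_per_kg = 8e-6                # quoted ballistic coefficient

    volume_m3 = (4 / 3) * math.pi * radius_m**3        # ~524 m^3
    mass_from_density = volume_m3 * density_kg_m3      # ~1.2e7 kg

    frontal_area_m2 = math.pi * radius_m**2            # ~78.5 m^2
    mass_implied_by_b = frontal_area_m2 / b_m2_per_kg  # ~9.8e6 kg (Cd folded in)

    print(f"{mass_from_density:.2e} kg vs {mass_implied_by_b:.2e} kg")

Either way it's on the order of ten thousand tonnes, so the launch cost is very much the hard part.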

That would make a hell of a bang when it eventually deorbits.

Why no fold mirror then?


From the article:

>Evercoast deployed a 56 camera RGB-D array

Do you know which depth cameras they used?


We (Evercoast) used 56 RealSense D455s. Our software can run with any camera input, from depth cameras to machine vision to cinema REDs. But for this, RealSense did the job. The higher end the camera, the more expensive and time consuming everything is. We have a cloud platform to scale rendering, but it’s still overall more costly (time and money) to use high res. We’ve worked hard to make even low res data look awesome. And if you look at the aesthetic of the video (90s MTV), we didn’t need 4K/6K/8K renders.


You may have explained this elsewhere, but if not: what kind of post-processing did you do to upscale or refine the RealSense video?

Can you add any interesting details on the benchmarking done against the RED camera rig?


This is a great question; would love some feedback on this.

I assume they stuck with RealSense for proper depth maps. However, those are limited to roughly a 6 meter range, and their depth imaging can't resolve features smaller than the native resolution allows (it gets worse past 3 m too, since there's less and less parallax, among other issues). I wonder how they approached that as well.
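
As a rough model of why quality drops off with distance (baseline, focal length, and subpixel error below are approximate D455-class numbers I'm assuming, not official specs):

    # Stereo depth error grows roughly with the square of distance, because
    # parallax shrinks: dz ~= z^2 * disparity_error / (focal_px * baseline).
    # The constants are my approximations, not Intel's published figures.

    baseline_m = 0.095      # D455 stereo baseline, ~95 mm
    focal_px = 450          # depth focal length in pixels at 848x480 (approx.)
    subpixel_err = 0.08     # typical disparity error, in pixels

    def depth_error_m(z_m):
        return (z_m ** 2) * subpixel_err / (focal_px * baseline_m)

    for z in (1, 2, 3, 4, 6):
        print(f"at {z} m: ~{depth_error_m(z) * 100:.1f} cm error")
    # roughly 0.2 cm at 1 m, 1.7 cm at 3 m, 6.7 cm at 6 m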



I was not involved in the capture process with Evercoast, but I may have heard somewhere they used RealSense cameras.

I recommend asking https://www.linkedin.com/in/benschwartzxr/ for accuracy.


Couldn't you just use iPhone Pros for this? I developed an app specifically for photogrammetry capture using AR and the depth sensor, as it seemed like a cheap alternative.

EDIT: I realize a phone is not on the same level as a RED camera, but I just saw iPhones as a massively cheaper option than the alternatives in the field I worked in.


ASAP Rocky has a fervent fanbase that's been anticipating this album. So I'm assuming that whatever record label he's signed to gave him the budget.

And when I think back to another iconic hip hop video (iconic for that genre) where they used practical effects and military helicopters chasing speedboats in the waters off of Santa Monica... I bet they had change to spare.


Is there any reason to think https://thebaffler.com/salvos/the-problem-with-music doesn't apply here?


A single camera only captures the side of the object facing it. Knowing how far away the camera-facing side of a Rubik's Cube is helps if you're making educated guesses (novel view synthesis), but it won't solve the problem of actually photographing the backside.

A cube has six sides, which means you need a minimum of six iPhones around an object to capture all of them before you can freely move around it. You might as well look at open-source alternatives rather than relying on Apple surprise boxes for that.

In cases where your subject is static, such as a building, you can wave a single iPhone around for a result comparable to more expensive rigs, of course.


The minimum is four RGB-only cameras (if you want RGB data) but adding lidar really helps.

The standard pipeline can infer a huge amount of data, and there are a few AI tools now for hallucinating missing geometry and backfaces based on context recognition, which can then be converted back into a splat for fast, smooth rendering.


I think it's because they already had proven capture hardware and harvesting/processing workflows.

But yes, you can easily use iPhones for this now.


Looks great, by the way. I was wondering if there's a file format for volumetric video captures.


Some companies have a proprietary file format for compressed 4D Gaussian splatting. For example: https://www.gracia.ai and https://www.4dv.ai.

Check this project, for example: https://zju3dv.github.io/freetimegs/

Unfortunately, these formats are currently locked behind cloud processing, so adoption is rather low.

Before Gaussian splatting, textured mesh caches would be used for volumetric video (e.g. Alembic geometry).


https://developer.apple.com/av-foundation/

https://developer.apple.com/documentation/spatial/

Edit: As I'm digging, this seems to be focused on stereoscopic video as opposed to actual point clouds. It appears applications like cinematic mode use a monocular depth map, and their lidar outputs raw point cloud data.


A LIDAR point cloud from a single point of view is a monocular depth map. Unless the LIDAR in question is, like, using supernova-level gamma rays or neutrino generators for the laser part to get density and albedo volumetric data over its whole distance range.

You just can't see the back of a thing by knowing the shape of the front side with current technologies.


Right! My terminology may be imprecise here, but I believe there is still an important distinction:

The depth map stored for image processing is image metadata, meaning it calculates one depth per pixel from a single position in space. Note that it doesn't have the ability to measure that many depth values, so it measures what it can using LIDAR and focus information and estimates the rest.

On the other hand, a point cloud is not image data. It isn't necessarily taken from a single position (in theory the device could be moved around to capture additional angles), and the result is a sparse point cloud of depth measurements. Also, raw point cloud data doesn't necessarily come tagged with point metadata such as color.

I also note that these distinctions start to vanish when dealing with video or using more than one capture device.
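
To make the relationship concrete, unprojecting a depth map through pinhole intrinsics gives you exactly that single-viewpoint point cloud, one point per pixel (the intrinsics here are made-up illustrative values):

    import numpy as np

    # Depth map + pinhole intrinsics -> single-viewpoint point cloud.
    # fx, fy, cx, cy below are illustrative, not any real device's calibration.

    def depth_to_points(depth, fx, fy, cx, cy):
        """depth: (H, W) metric depths; returns (N, 3) XYZ points."""
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        x = (u - cx) * depth / fx
        y = (v - cy) * depth / fy
        return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

    depth = np.full((480, 640), 2.0)   # toy input: a flat wall 2 m away
    points = depth_to_points(depth, fx=600.0, fy=600.0, cx=320.0, cy=240.0)
    print(points.shape)                # (307200, 3): one point per pixel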


No, LIDAR data are necessarily taken from a single position. They are 3D, but literally single-eyed. You can't tell from LIDAR data whether you're looking at a half-cut apple or an intact one. This becomes obvious the moment you try to rotate a LIDAR capture: it's just the skin. You need depth maps from all angles to reconstruct the complete skin.

So you have to have a minimum of two, for the front and back of a dancer. Actually, the seams are kind of dubious, so let's say three, 120 degrees apart. Well, we need ones looking down as well as up for baggy clothing, so more like nine, 30 degrees apart vertically and 120 degrees horizontally, ...

and ^ this goes far enough down that installing a few dozen identical non-Apple cameras in a monstrous sci-fi cage starts making a lot more sense than an iPhone, for a video.
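
As a toy illustration of how fast the camera count grows (spacings made up, not a real rig design):

    import math

    # Place cameras on rings around the subject at a few elevation bands.
    def rig_positions(radius_m, azimuth_step_deg, elevations_deg):
        cams = []
        for elev in elevations_deg:
            for az in range(0, 360, azimuth_step_deg):
                a, e = math.radians(az), math.radians(elev)
                cams.append((radius_m * math.cos(e) * math.cos(a),
                             radius_m * math.cos(e) * math.sin(a),
                             radius_m * math.sin(e)))
        return cams

    # Three elevation bands, one camera every 30 degrees of azimuth:
    print(len(rig_positions(2.5, 30, [-20, 10, 40])))   # 36 cameras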


Recording point clouds over time, I guess I mean. I'm not going to pretend to understand video compression, but could the movement-following part be done in 3D the same way it is in 2D?


Why would they go for the cheapest option?


It was more the point that the technology is much cheaper now. The company I worked for had completely missed that while trying to develop in-house solutions.


Azure Kinect


Ask one of the hundreds of vending machine companies in the NYC area where they put them, I suppose. https://www.google.com/maps/search/vending+machine/@40.69452...

I walked into a Fred Meyer yesterday and saw probably ten vending machines. The Redbox DVD rental machine outside, then capsule toy, Pokemon card and key duplication vending machines, filtered water and lottery ticket machines, Coinstar coin counting machine...


The only people who care about SSO are large enterprises. Coincidentally, large enterprises also are the only customers that make SAAS profitable. Every other plan is part of the sales funnel to the big enterprise contracts.


> The only people who care about SSO are large enterprises.

I can't tell if this is sarcasm or not. I'm going to assume that it is, in fact, sarcasm. Because this is definitely untrue in reality.


Is this meant as sarcasm?

I run a bunch of services for friends and family, things like Immich, Wallabag, Mealie, etc. Fewer than 10 users, but do you expect me to create and maintain separate accounts for each one for every service?

The SSO tax is stupid. If your whole business model is based on putting SSO behind a paywall, it’s a sign of a broken business model.


The author has got to be taking the piss with that page background. Constantly moving blobs, really?


Seems like a foreshock of AGI if the average human is no longer good enough to give feedback directly and the nets instead have to do recursive self improvement themselves.


No, we're just really vain and like models that suck up to us more than ones that disagree, even when the model is correct and the user is wrong. People also prefer confident, well-formatted wrong responses to plain correct ones, because we have deep knowledge in our own narrow field but know basically nothing outside it, so we can't gauge the correctness of arbitrary topics.

OpenAI letting RLHF go wild with direct feedback is the reason for the sycophancy and emoji-bullet point pandemic that's infected most models that use GPTs as a source of synthetic data. It's why "you're absolutely right" is the default response to any disagreement.


Double time at 14 hours? IBEW local 46 starts double time after 10: https://ibew46.com/media/7641/071724iwtentativelyagreedto.pd...


Electrical work is physical work, to point out just one obvious difference.


O365 raising the price to $40 a month ten years from now didn't quite land. Microsoft 365 E5 is $57 a month right now! $100 or $1000 a month makes the joke clearer.


A couple thousand of them in the US: https://carta.com/data/spv-spotlight-q3-2024/


