
Careful: their ToS makes it clear they train on your Antigravity prompts (even on AI Ultra), and there is no opt-out that I can find.


Google keeps changing their privacy and “don’t train on my data/code” options. When gemini-cli launched, there was a clear toggle for “don’t train on my code.” That’s now gone; for me it just links to a generic privacy page. Maybe something about my account changed; I can’t figure it out. Deep in the Cloud Gemini console, there’s another setting that might control training, but it’s not clear which products it actually covers.

Trying to pay for Gemini 3 is confusing. Maybe an AI Ultra personal subscription? I already pay for OpenAI’s and Anthropic’s pro/max plans and would happily pay Google too. But the only obvious option is a $250/month tier, and its documentation indicates Google can train on your code unless you find and enable the correct opt-out. If that opt-out exists across all the products, it’s not obvious where it lives or what it covers.

Workspace complicates it further. Google advertises that with business Workspace accounts your data isn’t used for training, so I was going to try Antigravity on our codebase. By this point I knew I couldn’t take Google at its word, so I read the ToS carefully. They train on your prompts and source code, and there doesn’t appear to be a way to pay them and opt out right now. Be careful: paying for Google Workspace does not protect you. Always read the ToS.

Be careful with AI Studio and your Google Workspace accounts too. They train on your prompts unless you switch to API mode.

The result is a lot of uncertainty. I genuinely have no idea how to pay Google for Gemini without risking my code being used for training. And if I do pay, I can’t tell whether they’ll train on my prompts anyway.

The marketing for their coding products does not clearly state when they do or do not train on your prompts and code.

I had to run deep research to understand the risks of using Gemini 3 for agentic work, and I still don't feel confident that I understand them. I might have said some incorrect things above, but I am just so confused. I feel like I have at best a 75% grasp on the situation.

I don't have a lot of trust here. And honestly, this feels confusing and deceptive. One could easily mistake it for a deliberate strategy to gather training data through ambiguity and dark patterns; it certainly looks like this could be Google's strategy to win the AI race. I assume it just looks that way, and that they aren't being evil on purpose.

OpenAI in particular has my trust. They get it. They are carefully building the customer experience; they are product- and customer-driven from the top.


Personal Antigravity hack: add a GPL license to every file, so Google filters them out before training to avoid legal complications. IANAL.
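
For anyone curious, a minimal sketch of what that could look like as a one-off Python script. The glob pattern and SPDX header are placeholders to adjust for your codebase, and whether any training pipeline actually filters on GPL markers is pure speculation (again, IANAL):

    # Prepend an SPDX GPL marker to every source file (speculative hack; IANAL).
    import pathlib

    HEADER = "// SPDX-License-Identifier: GPL-3.0-only\n"

    for path in pathlib.Path(".").rglob("*.swift"):  # placeholder glob; adjust per codebase
        text = path.read_text()
        if "SPDX-License-Identifier" not in text:  # don't double-stamp files
            path.write_text(HEADER + text)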


>OpenAI in particular has my trust.

I wouldn't trust Sam Altman. Or any of the big players really.


> trust

Hahaha...HAHAhaha. HAHAHHAHAHAHAHAHAHA!!!


A v2.0 update for my biology education app. I'm adding the ability to walk around cell models with billions of atoms on the Vision Pro.

I'm designing the content browser right now. I'm trying to achieve something really immersive like Apple's new Spatial Gallery app.


It's becoming quite common in South Korea for there to be a fixed tablet on a stand at each table. You order from the digital menu and then immediately pay with the integrated POS. If there's no tablet, you pay on your way out. There's a button at each table to summon the waiter. No tips.

Japan has ticket vending machines in many restaurants. You prepay and order at the front of the restaurant; the machine prints a little ticket, and you give that to the waiter or kitchen.


It's not beyond imagination that this will happen to the Canadian Prime Minister next.


I wonder what would happen if the Canadian Prime Minister then decided to file charges of treason against Musk, and issue an arrest warrant?


Nothing good I am afraid.


Optimizing our rendering algorithms for Apple Vision Pro. Trying to render a 300-million-atom cell model at 90fps stereo. It's trivial on a 4090; it's pretty hard on a low-power mobile GPU. I'm thinking about a bunch of immersive mesoscale biology stuff next.
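
Some toy back-of-envelope numbers (my own illustration, not measurements from our renderer) for why brute-force sphere meshes are a non-starter at this scale:

    # Toy arithmetic: triangle throughput a brute-force sphere-mesh approach would need.
    atoms = 300_000_000
    tris_per_sphere = 80      # a modest icosphere per atom
    fps, eyes = 90, 2         # 90fps, stereo
    tris_per_sec = atoms * tris_per_sphere * fps * eyes
    print(f"{tris_per_sec:.1e} triangles/sec")  # ~4.3e12, far beyond any mobile GPU

Which is why work in this space leans on impostors, aggressive level-of-detail and culling rather than real per-atom geometry.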


Curious: do you have a favorite Metal project to learn from? I imagine foveated rendering and temporal reprojection are important here.


In my university days I climbed Mt Fuji at night for the sunrise, in jeans and running shoes, carrying a tiny pen light. At the top there were people with small oxygen tanks (understandable; there are legitimate medical concerns for some). I'd do it differently today. :)

I'm ignoring the point of the article, but I'm currently in a country with a strong hiking culture. Everyone is decked out with every piece of hiking gear imaginable for a short trek up a hill (2 hour round trip?). It's a bit of a status thing. Well... maybe there is a connection to the article. Do we sometimes avoid simple tools in startups because of ego/status concerns?


I think you are right.

But a gentler answer is that if you don't know what you need to do the hike, you ask around for best practices and probably end up following some that are overengineered.

The people like you on your hike, in my experience, fall into two groups: either they've had so much experience that they know exactly what works and what doesn't for the conditions, or they kind of got lucky.


Thank you for the gentler perspective, and I do think it plays a role.


"Current visualization software, such as UCSF ChimeraX6, can only render one or a few protein structures at the atomic level."

Lots of current visualization software is focused on visualizing a single protein structure (for example, ChimeraX). New visualization and modeling systems are being developed to scale up to cellular scenes and even whole cells. For example, systems like Le Muzic et al.'s cellVIEW (2015) [1] are capable of rendering atomic-resolution whole-cell datasets like this in real time: https://ccsb.scripps.edu/gallery/mycoplasma_model/

[1] : https://www.cg.tuwien.ac.at/research/publications/2015/cellV...
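
To give a flavor of the level-of-detail tricks this line of work relies on, here's a toy sketch (my own illustration, not cellVIEW's actual pipeline) of distance-based subsampling, with hypothetical thresholds:

    # Toy distance-based LOD: keep every nearby atom, thin distant ones aggressively.
    import numpy as np

    rng = np.random.default_rng(0)
    atoms = rng.uniform(-500.0, 500.0, size=(1_000_000, 3))  # toy "cell", nm
    camera = np.array([0.0, 0.0, 800.0])

    dist = np.linalg.norm(atoms - camera, axis=1)
    stride = np.where(dist < 200, 1, np.where(dist < 600, 8, 64))
    keep = (np.arange(len(atoms)) % stride) == 0
    print(f"drawing {int(keep.sum()):,} of {len(atoms):,} atoms this frame")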


I cited cellVIEW in a parallel comment. ;)

I still think "few" is the wrong word. I usually think of "few" as meaning up to around 6, while Chimera and VMD can easily handle hundreds of proteins at the atomic level.


I just released a biology education app, very much like the preprint, for the Vision Pro launch (and soon for iPad/iPhone). I worked with David Goodsell's group to integrate their whole-cell bacteria model, and David wrote the content. It looks like this: https://twitter.com/timd_ca/status/1753250624677007492 Our first bit of content is a tour through a 300-million-atom bacteria cell for Apple Vision Pro (>60 fps, stereoscopic, atomic resolution).

We developed the tech for iPhone, iPad and AVP mobile GPUs (UE5 doesn't support this on the devices we're targeting). iPad: https://twitter.com/timd_ca/status/1592948101144547328

The linked preprint is beautiful, and I love the pipeline. I wonder if it's possible to export to other tools like Blender? It's part of a pretty cool field of research into mesoscale modeling and visualization. For me these are a few of the standout papers, projects and works in the area (and there are many more):

- Le Muzic et al. "Multi-Scale Rendering of Large Biomolecular Datasets" 2015 [1]

- - Ivan Viola's group helped pioneer large scale molecular visualization. This reference should be in the preprint, IMO.

- Maritan et al. "3D Whole Cell Model of a Mycoplasma Bacterium" [2]

- - This is out of David Goodsell's lab and the model I'm using.

- Stevens et al. "Molecular dynamics simulation of an entire cell" [3]

- Brady Johnston's Molecular Nodes addon for Blender [4]

- YASARA PetWorld [5]

[1] : https://www.cg.tuwien.ac.at/research/publications/2015/cellV...

[2] : https://ccsb.scripps.edu/gallery/mycoplasma_model/

[3] : https://twitter.com/JanAdStevens/status/1615693906137473030 and https://www.frontiersin.org/articles/10.3389/fchem.2023.1106...

[4] : https://bradyajohnston.github.io/MolecularNodes/

[5] : http://download.yasara.org/petworld/index.html


> The skills are not transferable

In the immediate race towards "AR (and VR) all the things" (Meta), game developers are poised to be in high demand. Maybe we'll see the insane salaries of ML engineers directed at game developers if the metaverse takes off.

