Unfortunately, it doesn't look like there's much of a guarantee that value objects wouldn't result in heap allocations. https://openjdk.org/jeps/401 even contains:
> So while many small value classes can be flattened, classes that declare, say, 2 int fields or a double field, might have to be encoded as ordinary heap objects.
There's a further comment about the potential to opt out of atomicity guarantees to avoid that problem, but that brings its own problems - it looks like pre-JIT code would still allocate, and who knows how consistent the JIT would be about scalarization. IIRC there was also some mention somewhere of simply forcing large enough value objects to always be heap-allocated.
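To make the size math from that quote concrete, here's a minimal sketch using the preview `value class` syntax from JEP 401 (the class name and fields are illustrative, and this won't compile without a build that enables the relevant preview feature):

```java
// Illustrative only: JEP 401 preview syntax, not guaranteed behavior.
value class IntPair {
    // Two 32-bit fields = 64 bits of payload; adding the null flag pushes
    // the flattened encoding past a 64-bit atomic read/write, which is why
    // the JEP says such a class "might have to be encoded as an ordinary
    // heap object" on common platforms.
    final int first;
    final int second;

    IntPair(int first, int second) {
        this.first = first;
        this.second = second;
    }
}
```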
That’s some very selective quoting there. Let’s do the full thing:
> Heap flattening must maintain the integrity of objects. For example, the flattened data must be small enough to read and write atomically, or else it may become corrupted. On common platforms, "small enough" may mean as few as 64 bits, including the null flag. So while many small value classes can be flattened, classes that declare, say, 2 int fields or a double field, might have to be encoded as ordinary heap objects.
And maybe the end of the next paragraph is even more relevant:
> In the future, 128-bit flattened encodings should be possible on platforms that support atomic reads and writes of that size. And the Null-Restricted Value Types JEP will enable heap flattening for even larger value classes in use cases that are willing to opt out of atomicity guarantees.
Value types still require allocation for types larger than 128 bits if the value is either nullable or atomic — that seems like a reasonable trade-off to me.
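For comparison, a sketch of a value class whose payload alone is already 128 bits (again just the JEP 401 preview syntax, with an illustrative class; nothing here is guaranteed to flatten on current JDKs):

```java
// Illustrative only: two 64-bit fields = 128 bits before the null flag.
// Per the quoted JEP text, heap flattening here would need either 128-bit
// atomic support or the opt-outs from the Null-Restricted Value Types JEP;
// otherwise instances stay ordinary heap objects behind references.
value class Complex {
    final double re;
    final double im;

    Complex(double re, double im) {
        this.re = re;
        this.im = im;
    }
}
```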
Yeah, I had edited my comment to expand a bit on that. Nevertheless, this is just one place where allocation of value objects may appear; and it looks like eliding allocations is still generally in the "optimization" category rather than the "guarantee" category.