hnfilter
134 points by joe_mwangi about 11 hours ago | 27 comments | [HN]
wood_spirit about 11 hours ago q=0.62
This is interesting. Java desperately needs arrays of structs as type-safe sugar over high-performance arenas, but the places you'd turn to this would be in a zero-allocation effort, where the cost of this library's off-heap access and the object allocation in the getters and setters largely negate the advantages for a lot of use cases.
joe_mwangi about 11 hours ago q=0.62
Yup, totally agree. Java does need arrays of structs. Hopefully value classes will help out through flattened arrays. In the future, one could use value records with this library with likely zero-cost allocation. The library doesn't use any reflection calls for get and set, hence the high performance, and using records helps a lot with escape analysis. Planning to do some serious benchmarks soon. Some preliminary tests show it's similar to C code (example code in the test package). Performance suffers if record fields are arrays, due to the heap allocation of the arrays.
PaulHoule about 10 hours ago q=0.62
The thing I coded where I felt the weight of the GC the most was a chess engine in Java that needed transposition tables. Using a regular HashMap or anything similar was too slow to really speed up the engine. If my son had stayed interested in chess I would have coded up an off-heap transposition table, but he switched to guitar, which changed my side projects.
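A minimal sketch of such an off-heap transposition table with the FFM API (illustrative only, JDK 22+ assumed; a real engine would add a replacement policy, entry flags, etc.):

```java
import java.lang.foreign.Arena;
import java.lang.foreign.MemorySegment;
import java.lang.foreign.ValueLayout;

// Hypothetical off-heap transposition table: fixed slot count, no GC pressure.
// Each 16-byte entry: zobrist key (long), score (int), depth (int).
public class TranspositionTable {
    private static final long ENTRY_BYTES = 16;
    private final MemorySegment table;
    private final long slots;

    public TranspositionTable(Arena arena, long slots) {
        this.slots = slots;
        // Arena.allocate zero-initializes, so an all-zero entry reads as "empty".
        this.table = arena.allocate(slots * ENTRY_BYTES, 8);
    }

    private long offset(long zobristKey) {
        return Long.remainderUnsigned(zobristKey, slots) * ENTRY_BYTES;
    }

    public void put(long zobristKey, int score, int depth) {
        long off = offset(zobristKey);
        table.set(ValueLayout.JAVA_LONG, off, zobristKey);
        table.set(ValueLayout.JAVA_INT, off + 8, score);
        table.set(ValueLayout.JAVA_INT, off + 12, depth);
    }

    /** Returns the stored score, or null on a miss or an insufficient depth. */
    public Integer probe(long zobristKey, int minDepth) {
        long off = offset(zobristKey);
        if (table.get(ValueLayout.JAVA_LONG, off) != zobristKey) return null;
        if (table.get(ValueLayout.JAVA_INT, off + 12) < minDepth) return null;
        return table.get(ValueLayout.JAVA_INT, off + 8);
    }

    public static void main(String[] args) {
        try (Arena arena = Arena.ofConfined()) {
            var tt = new TranspositionTable(arena, 1 << 20);
            tt.put(0x9E3779B97F4A7C15L, 42, 6);
            System.out.println(tt.probe(0x9E3779B97F4A7C15L, 4)); // hit: 42
            System.out.println(tt.probe(123L, 0));                // miss: null
        }
    }
}
```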
joe_mwangi about 10 hours ago q=0.58
Hope you come back. It would be cool to venture into this new data-oriented programming phase Java has invested a lot in.
traderj0e about 9 hours ago q=0.62
I'm glad they saw the light. The last time I used Java was in high school, when it was version 7 and pure OOP; it didn't even have lambdas. After I learned other languages I didn't want to use Java again, since it seemed like a lot of boilerplate for something that didn't even give good performance.
PaulHoule about 7 hours ago q=0.58
I use Java all the time for ordinary programming at work, and I think it is great, but I'm not in a hurry to mess with stuff off-heap.
fweimer about 10 hours ago q=0.58
I doubt value classes will be helpful here because the array would have to be immutable. Context: https://openjdk.org/jeps/401
spockz about 9 hours ago q=0.62
Why does the array need to be immutable? Isn't it enough to allocate the pessimistic maximum of the record size times the array length? In Go, slices work quite nicely for dealing with "immutable" arrays: you can work on views of those arrays while keeping the same memory backing.
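For comparison, FFM's MemorySegment.asSlice gives Java a similar view-over-shared-backing mechanism today (a small sketch, JDK 22+ assumed):

```java
import java.lang.foreign.Arena;
import java.lang.foreign.MemorySegment;
import java.lang.foreign.ValueLayout;

// Sketch of the "view over shared backing memory" idea from Go slices:
// asSlice creates a view, not a copy, so writes go through to the backing.
public class SliceViews {
    public static void main(String[] args) {
        try (Arena arena = Arena.ofConfined()) {
            MemorySegment backing = arena.allocate(ValueLayout.JAVA_INT, 8); // 8 ints
            // View covering elements [4, 8) of the backing segment.
            MemorySegment view = backing.asSlice(4 * Integer.BYTES, 4 * Integer.BYTES);
            view.set(ValueLayout.JAVA_INT, 0, 99); // writes through to backing[4]
            System.out.println(backing.get(ValueLayout.JAVA_INT, 4 * Integer.BYTES)); // 99
        }
    }
}
```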
kbolino about 7 hours ago q=0.62
There are two problems as I see it.

The first is that value types themselves are immutable. This affects code generation and optimization: if you were to modify the value with unmanaged code, you might not observe the modification properly from managed code. Maybe this restriction will get relaxed, but I don't see that on any roadmap any time soon.

The second problem is that value types are still nullable. The flattened array is not going to be identical to a Go slice or a C# Span etc. because it has to track the nullness of each element. It seems they don't want to nail down the exact storage format for that yet, possibly to change it in the future, and possibly because they want to add language-level control over nullability eventually too.

joe_mwangi about 10 hours ago q=0.58
Yeah, you might be right. Hopefully we'll have this one day: https://openjdk.org/jeps/8261007
zmmmmm about 8 hours ago q=0.62
I find it weird that the people steering Java have seemingly been willing to sit out the high-performance computing use case while it has so dominated the computing landscape. They are patiently, incrementally iterating on all these JEPs that would support dramatically improved capabilities and make Java a very attractive platform for ML, but they keep fretting over minor interface adjustments, cycle after cycle. I get that there is a philosophy of keeping the language stable and well designed, but this is really taking it to an extreme in the face of missing an entire segment of computing.
matt_heimer about 10 hours ago q=0.62
What is the positioning for this and how does it work? A comparison to SBE might be nice.

I understand the issue about Layout and MemorySegment being verbose, but the reason I'm using those things is to develop high-performance software that uses off-heap memory and bypasses object allocation.

What does "map Java record types onto native memory" actually mean? Did you somehow turn a Java record into a flyweight, or is `Point point = points.get(0);` just instantiating a record instance using data read from off-heap memory? If it's a dynamic mapping library using reflection, that's cool, but doesn't it kill the performance goals for most Java off-heap usage?

Is this more of an off-heap-to-heap bridge for pulling data into normal Java space when performance isn't critical?
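For reference, the verbose-but-fast baseline being described (hand-written MemoryLayout plus VarHandles for an array of `Point(int x, int y)` structs) looks roughly like this; a sketch assuming JDK 22+, not the library's code:

```java
import java.lang.foreign.Arena;
import java.lang.foreign.MemoryLayout;
import java.lang.foreign.MemorySegment;
import java.lang.foreign.StructLayout;
import java.lang.foreign.ValueLayout;
import java.lang.invoke.VarHandle;

// Manual FFM access to an off-heap array of Point{int x; int y;} structs.
public class ManualLayout {
    static final StructLayout POINT = MemoryLayout.structLayout(
            ValueLayout.JAVA_INT.withName("x"),
            ValueLayout.JAVA_INT.withName("y"));
    // Array-element VarHandles take (segment, base offset, index) coordinates.
    static final VarHandle X =
            POINT.arrayElementVarHandle(MemoryLayout.PathElement.groupElement("x"));
    static final VarHandle Y =
            POINT.arrayElementVarHandle(MemoryLayout.PathElement.groupElement("y"));

    public static void main(String[] args) {
        try (Arena arena = Arena.ofConfined()) {
            MemorySegment points = arena.allocate(POINT, 10); // 10 points, off-heap
            X.set(points, 0L, 3L, 7);   // points[3].x = 7
            Y.set(points, 0L, 3L, 11);  // points[3].y = 11
            System.out.println(X.get(points, 0L, 3L)); // 7
        }
    }
}
```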

joe_mwangi about 10 hours ago q=0.62
I use a C-struct layout; I should be more explicit in the readme. I use the ClassFile API to generate bytecode during initialisation of the Mem<T>, and the bytecode is cached in case it is initialised again somewhere based on the same record type (I don't cache for records declared locally in a method). The class created from implementing Mem is a hidden class.

So basically, given a record, we analyse the layout based on the record's state description, and then for that Mem implementation (the hidden class) we generate static final VarHandles plus the layout; the segment is an instance field. We then generate bytecode for the get and set to avoid reflection (actually, this is where most of the headache in the implementation is).

Go to the test package to see some ad hoc, rudimentary Java (and native) files for benchmarks. Planning to run JMH benchmarks soon.
steve_barham about 10 hours ago q=0.62
I did something similar a few years back, with a slightly different approach to declaration: interfaces denoted the layout of the struct. Mutation was opt-in by exposing setters using the (at the time) standard JavaBeans conventions, and an annotation processor took care of generating an implementing class, which could be used where you wanted an on-heap box of an off-heap structure.

One benefit of this approach was that by using the interface as the type you could fairly easily support a flyweight pattern, reducing GC pressure when working with large off-heap collections. The parallels between stateless interfaces and off-heap structs were also quite pleasing.

I'd love to see a similar effort using more modern techniques than Unsafe et al.
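The interface-as-type flyweight described above can be sketched with the modern FFM API instead of Unsafe (names like `PointFlyweight` are hypothetical; JDK 22+ assumed):

```java
import java.lang.foreign.Arena;
import java.lang.foreign.MemorySegment;
import java.lang.foreign.ValueLayout;

// One flyweight object is re-pointed at successive elements of an off-heap
// array, so iterating allocates no per-element objects.
public class FlyweightDemo {
    interface Point {
        int x();
        int y();
    }

    static final class PointFlyweight implements Point {
        static final long SIZE = 8; // two ints
        private MemorySegment segment;
        private long offset;

        PointFlyweight at(MemorySegment segment, long index) {
            this.segment = segment;
            this.offset = index * SIZE;
            return this; // same object, new position: the flyweight pattern
        }

        public int x() { return segment.get(ValueLayout.JAVA_INT, offset); }
        public int y() { return segment.get(ValueLayout.JAVA_INT, offset + 4); }
    }

    public static void main(String[] args) {
        try (Arena arena = Arena.ofConfined()) {
            // Zero-initialized array of 3 points.
            MemorySegment points = arena.allocate(PointFlyweight.SIZE * 3, 4);
            points.set(ValueLayout.JAVA_INT, 16, 42); // points[2].x = 42
            PointFlyweight fw = new PointFlyweight();
            long sum = 0;
            for (long i = 0; i < 3; i++) sum += fw.at(points, i).x(); // one object total
            System.out.println(sum); // 42
        }
    }
}
```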

joe_mwangi about 10 hours ago q=0.62
Interesting approach. I think Project Babylon did the same thing https://github.com/openjdk/babylon/blob/code-reflection/hat/...

I tested it and it's quite fast. Actually, you don't need to generate any bytecode on the fly. The problem is when you deal with arrays as fields; the implementation becomes difficult. You could revisit such an implementation one day if you're interested in coming back to it.

jayd16 about 10 hours ago q=0.58
At first glance it reminds me of C#'s Span<T>.
joe_mwangi about 10 hours ago q=0.19
Hahaha... inspired by it actually.
c-fe about 10 hours ago q=0.58
This seems very similar to SBE's encoder/decoder flyweights over raw memory. What are the differences?
joe_mwangi about 9 hours ago q=0.62
I have not used SBE, but looking at it, my understanding is that it starts from an explicit schema, typically XML, and generates encoder/decoder flyweights over a binary buffer. That gives the user much more control over field order (very important) and sizes. TypedMemory takes a different starting point, in which the Java record's shape is the schema, and the library derives the FFM MemoryLayout and accessors from that. I think the difference is schema/codegen/protocol orientation vs Java-type/FFM/in-memory-layout orientation.
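The "record shape is the schema" starting point can be illustrated with a toy reflective layout derivation (a sketch only, not TypedMemory's implementation; padding, alignment reordering, and most field types are ignored):

```java
import java.lang.foreign.MemoryLayout;
import java.lang.foreign.StructLayout;
import java.lang.foreign.ValueLayout;
import java.lang.reflect.RecordComponent;
import java.util.ArrayList;
import java.util.List;

// Derive a C-style StructLayout from a record's components at startup.
public class RecordSchema {
    record Point(int x, int y) {}

    static StructLayout layoutOf(Class<? extends Record> recordType) {
        List<MemoryLayout> fields = new ArrayList<>();
        for (RecordComponent rc : recordType.getRecordComponents()) {
            Class<?> t = rc.getType();
            if (t == int.class)         fields.add(ValueLayout.JAVA_INT.withName(rc.getName()));
            else if (t == long.class)   fields.add(ValueLayout.JAVA_LONG.withName(rc.getName()));
            else if (t == double.class) fields.add(ValueLayout.JAVA_DOUBLE.withName(rc.getName()));
            else throw new IllegalArgumentException("unsupported component type: " + t);
        }
        return MemoryLayout.structLayout(fields.toArray(MemoryLayout[]::new));
    }

    public static void main(String[] args) {
        StructLayout layout = layoutOf(Point.class);
        System.out.println(layout.byteSize()); // 8 (two 4-byte ints)
    }
}
```

Note the contrast with SBE: here field order follows the record's component order, whereas an SBE schema pins it down explicitly.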
kosolam about 11 hours ago q=0.19
Nice. Very clean API.
joe_mwangi about 11 hours ago q=0.58
Thanks, that was the main goal. Unions are where I decided to pause; there's no simple and ergonomic way to do them at the moment.
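For what it's worth, plain FFM does model C unions via MemoryLayout.unionLayout; the open question above is the ergonomic record-level mapping for them. A raw-FFM sketch of `union { int i; float f; }` (JDK 22+ assumed):

```java
import java.lang.foreign.Arena;
import java.lang.foreign.MemoryLayout;
import java.lang.foreign.MemorySegment;
import java.lang.foreign.UnionLayout;
import java.lang.foreign.ValueLayout;
import java.lang.invoke.VarHandle;

// Both members share the same 4 bytes, as in a C union.
public class UnionDemo {
    static final UnionLayout U = MemoryLayout.unionLayout(
            ValueLayout.JAVA_INT.withName("i"),
            ValueLayout.JAVA_FLOAT.withName("f"));
    static final VarHandle I = U.varHandle(MemoryLayout.PathElement.groupElement("i"));
    static final VarHandle F = U.varHandle(MemoryLayout.PathElement.groupElement("f"));

    public static void main(String[] args) {
        try (Arena arena = Arena.ofConfined()) {
            MemorySegment u = arena.allocate(U);
            F.set(u, 0L, 1.0f);
            // The same four bytes reinterpreted as an int: IEEE-754 bits of 1.0f.
            System.out.println(Integer.toHexString((int) I.get(u, 0L))); // 3f800000
        }
    }
}
```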
usernametaken29 about 10 hours ago q=0.19
Why not use Graal?
wwarner about 9 hours ago q=0.19
Wouldn’t apache arrow serve the same purpose?