unsoundInput's comments (Hacker News)

I'm wondering why they're getting more and more adamant that people register an account, even though a lot of people don't care about having an online persona outside of Facebook and wouldn't ever contribute anything to the platform.

I honestly believe they could do much better by providing a good experience to people who just want to stay up-to-date and follow a few people they find interesting.


Gotta pump up those numbers. Can't show growth without people signing up for accounts.


I don't buy it.

At least two core products of the biggest advertisement company in the world have no problem providing you value and generating ad-revenue without you having to be logged in. Google and YouTube would be nowhere near the success they are if they'd force you to log in.


Well, it's not a JVM class file but a Dalvik DEX file. A disassembler that can't generate compilable code from this valid bytecode is incomplete, or in other words: buggy.


Not if you are talking about compilable Java code, because as the article explains, there are things you can express in bytecode which you simply cannot express in Java source code.


Indeed. The lower-level language is necessarily more expressive, by definition. It is more difficult to write but allows fine-grained control. That is why some optimization and obfuscation tricks are done in assembler (in the native world, not Java), and hence a disassembler simply cannot translate them back.


There are two valid ways for a disassembler to mitigate this: a) decompile to a language in which the bytecode can be expressed (in a concise / expressive manner; Java would always be a "possible" target because of Turing completeness) or b) accommodate the fact that there could be signature collisions in Java, e.g. by prefixing/suffixing the method name.


If you change the method name you end up with code that behaves differently; just imagine something like this pseudocode:

if (!new Exception().getStackTrace().getSha1sum().startsWith("0000")) alert("hello decompiler")

Your comment about Java and Turing completeness doesn't make sense unless you want the decompiler to basically output a Java implementation of a JVM?
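The trick in the pseudocode above can be sketched in plain Java. This is a hypothetical illustration, not code from the article: the program branches on a hash of its own stack trace, so a decompiler that renames methods to work around signature collisions silently changes the program's behavior. All class and method names here are made up.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Arrays;

public class StackTraceCheck {
    // Hex-encoded SHA-1 of a string, used to fingerprint the stack trace.
    static String sha1Hex(String s) {
        try {
            byte[] digest = MessageDigest.getInstance("SHA-1")
                    .digest(s.getBytes(StandardCharsets.UTF_8));
            StringBuilder sb = new StringBuilder();
            for (byte b : digest) sb.append(String.format("%02x", b));
            return sb.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        // The stack trace embeds class and method names; renaming either one
        // changes the hash and flips the branch below.
        String trace = Arrays.toString(new Exception().getStackTrace());
        if (!sha1Hex(trace).startsWith("0000")) {
            System.out.println("hello decompiler");
        }
    }
}
```

Recompiling the decompiler's renamed output produces a different trace string, so the branch no longer matches the original program's behavior.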


Dare [0] emulates the Dalvik VM's runtime behavior to generate verifiable (for the vast majority of cases) Java bytecode from dex bytecode.

[0]: http://siis.cse.psu.edu/dare/


Even though I agree that >>the insert method of std::map being roughly 7 times slower than it should be<< is bad, these kinds of problems are not too hard to find and solve if they actually become a problem for your software.

The most problematic performance issues I've come across were usually bad/premature optimizations that were not (correctly) validated against a simpler implementation as a performance baseline. Things like parallelism (multi-threading, web workers) or caching can absolutely tank performance if not done correctly. Plus they usually tend to make stuff more complex and bug-prone.
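The "validate against a simpler implementation" idea can be sketched as follows. This is a made-up example, not from the comment: before trusting the parallel version, check it against the sequential baseline for both correctness and speed, because the fancy version is the one that can silently be both wrong and slower.

```java
import java.util.stream.LongStream;

public class BaselineCheck {
    // The simple, obviously-correct baseline implementation.
    static long sumSequential(long n) {
        return LongStream.rangeClosed(1, n).sum();
    }

    // The "optimized" candidate whose overhead may or may not pay off.
    static long sumParallel(long n) {
        return LongStream.rangeClosed(1, n).parallel().sum();
    }

    public static void main(String[] args) {
        long n = 10_000_000L;

        long t0 = System.nanoTime();
        long a = sumSequential(n);
        long t1 = System.nanoTime();
        long b = sumParallel(n);
        long t2 = System.nanoTime();

        // The baseline answers both questions: is it still correct,
        // and is the added complexity actually worth it on this workload?
        if (a != b) throw new AssertionError("parallel version is wrong");
        System.out.printf("sequential: %d ms, parallel: %d ms%n",
                (t1 - t0) / 1_000_000, (t2 - t1) / 1_000_000);
    }
}
```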


Funny, I tend to go the opposite way (immutable value types, stateful-as-needed behavior classes). I wonder how you argue that you have fewer stateful objects, considering data objects are usually instantiated more often than logic objects?
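A minimal sketch of the split described above, with all names made up for illustration: an immutable value type whose operations return new instances, plus one stateful behavior object only where state is actually needed.

```java
public class ImmutableValues {
    // Immutable value type: no setters, "mutation" returns a new instance.
    static final class Price {
        private final long cents;
        Price(long cents) { this.cents = cents; }
        long cents() { return cents; }
        Price plus(Price other) { return new Price(cents + other.cents); }
    }

    // The one stateful object: it accumulates immutable values.
    static final class Cart {
        private Price total = new Price(0);
        void add(Price p) { total = total.plus(p); }
        Price total() { return total; }
    }

    public static void main(String[] args) {
        Cart cart = new Cart();
        cart.add(new Price(100));
        cart.add(new Price(50));
        System.out.println(cart.total().cents());  // 150
    }
}
```

Even though many Price instances get created, none of them is mutable; the state is confined to the single Cart.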


IMHO you overestimate the use of Macs as development machines, doubly so in the parts of our industry that work on projects where high-performance GPU APIs are actually needed (games, CAD, simulations).


I would be interested to know where such a place is.

OS X's OpenGL support is several versions behind, with low performance, and you are definitely not going to write AZDO OpenGL code there.


I started using sqlbrite (in combination with sqldelight) and stopped treating the database as an object store. Instead I create an interface for the data the customer needs (e.g. an item interface for elements in a listview), write a select query whose result fulfills this interface, and let the rest happen via databinding. This seems to work fine up to a couple of thousand elements for me.
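The pattern described above can be sketched roughly like this. The interface and mapper names are made up, and the row mapping is a stand-in for what sqlbrite/sqldelight would do with a real cursor: the list UI depends only on a narrow interface, and each row of the SELECT result is mapped straight onto it.

```java
import java.util.ArrayList;
import java.util.List;

public class ItemQuery {
    // The only thing the listview (via databinding) needs to know about.
    public interface ListItem {
        long id();
        String title();
    }

    // Maps one row of e.g. "SELECT id, title FROM items ORDER BY title"
    // onto the interface; in practice this would read from a cursor.
    static ListItem fromRow(long id, String title) {
        return new ListItem() {
            public long id() { return id; }
            public String title() { return title; }
        };
    }

    public static void main(String[] args) {
        List<ListItem> items = new ArrayList<>();
        items.add(fromRow(1, "first"));
        items.add(fromRow(2, "second"));
        System.out.println(items.get(0).title());
    }
}
```

The database schema can then change freely as long as some query can still produce rows that satisfy the interface.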


That is actually not true. AFAIK multi-versioning of dependencies is not supported by build systems without manual effort (not counting major versions in non-conflicting namespaces). The problem is really more in fat libraries and in "unnecessary" (getters, setters) and synthetic (e.g. access to private methods from inner classes) methods that are usually not optimized away, especially in debug builds.


I've definitely included a library and then seen (otherlibrary).(library you included) as an option in autocomplete, fairly often. Happens a lot with utility-level things like okhttp.


I've seen this with okhttp as well.


I can't find the okhttp case you reference, but retrofit has a bunch of extra adapter artifacts that add an external library as a transitive dependency to your project.

The code for the binding classes lives in retrofit.adapter.{guava|rxjava|…} [0] but the respective library still lives in its usual package. [1]

If that weren't the case, you A) could not manually provide a minor version via the dependencies block in your build script, and B) would have interop problems between libraries, and between a library and your code, as the same interface copied to a different package is not the same interface to Java.

[0] https://github.com/square/retrofit/tree/master/retrofit-adap...

[1] https://github.com/square/retrofit/blob/master/retrofit-adap...
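The "same interface in a different package is not the same interface" point can be demonstrated with a contrived sketch (nested interfaces stand in for packages; all names are made up): two byte-identical interfaces are unrelated types to Java, so an implementation of one cannot be passed where the other is expected.

```java
public class PackageIdentity {
    // Imagine these as com.liba.Callback and com.libb.Callback:
    // textually identical, but distinct types.
    interface LibA { interface Callback { void run(); } }
    interface LibB { interface Callback { void run(); } }

    static void useA(LibA.Callback cb) { cb.run(); }

    public static void main(String[] args) {
        LibB.Callback cb = () -> System.out.println("hi");
        // useA(cb);          // does not compile: incompatible types
        useA(cb::run);        // only works by adapting to the other interface
        System.out.println(cb instanceof LibA.Callback);  // false
    }
}
```

This is exactly why copying a library into another artifact's package breaks interop with code built against the original package.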


Kind of related: There is an outstanding request [0] on the Android issue tracker to offer an official support/compat library for Android's sqlite bindings, that would bridge Android's API (in a different java package) to a sqlite binary that you provide.

There are already implementations [1][2] of this, but because it is not provided by the Android team it is hardly supported by ORMs, libraries, etc.

[0] https://code.google.com/p/android/issues/detail?id=202658

[1] https://www.sqlite.org/android/doc/trunk/www/index.wiki

[2] https://github.com/requery/sqlite-android


Does this not just hit the "The Multiple SQLite Problem" described in the article?


I don't think so, but I might be misinterpreting the article.

The problem - from what I understand - is that you used to be able to use the system's sqlite.so to access databases owned by your app from the native environment. With Android N you are no longer able to do this; you need to ship your own sqlite binary.

This can lead to problems when you want to access the same sqlite files from both the Android platform APIs and native (e.g. use it from native for business logic, debug it with a tool like Stetho [0] that uses android.database.sqlite) because of a version mismatch.

If Google offered (a copy of) the Android API with the ability to plug in a different sqlite-compatible binary, it would very likely find broad adoption. Then the problem stated by the article could be solved by shipping the app with a sqlite build that is both used by the native code and plugged into the compat library.

[0] https://facebook.github.io/stetho/


I imagine that the industry is under comparatively little scrutiny by regulators and the public, and can therefore be as fast-moving and rule-bending[0] as the top SV startups. This combined with less (at least perceived) prudence in Europe might give them the edge over their competition.

[0] https://news.ycombinator.com/item?id=12855554


Makes sense.

Even in areas where they do match up tech-wise (auto, Nokia, Airbus/Ariane, etc.), maybe they need an Elon Musk type to do some ass kicking.


I think this talk makes his opinion on how breaking changes in libraries should be handled (in the context of the JVM ecosystem) very clear:

A) avoid breaking API changes

B) if you have to make large breaking API changes that will probably affect a lot of dependent projects, make it a new artifact / namespace / library that can live side-by-side with the old version

B is actually pretty common in large Java library projects (rxjava -> rxjava 2, retrofit -> retrofit 2, dagger -> dagger 2 all have independent Maven artifacts and I think also packages) and IMHO this approach makes a lot of sense. It's also the more important part of this talk compared to his critique of semver.


Isn't semver the best way to do what he's advocating, then?

I mean, it's not like people delete 1.3.0 from their packaging system when they release 2.0.0. Incrementing the major version number is semver's way of declaring a new namespace, and once declared, it lives alongside the old ones.

What is Hickey suggesting be done differently?


It is about treating new versions with major API breakage as if they were completely new libraries, not as a "maybe, possibly drop-in replacement, iff we use just the right feature subset". E.g. RxJava changed their Maven artifact ('io.reactivex:rxjava:1.y.z' -> 'io.reactivex.rxjava2:rxjava:2.y.z') and the package in which their code lives ('rx' -> 'io.reactivex'). This makes it possible for both versions to be used at the same time while transitioning, without breaking dependencies that rely on the older version and without having to bother with automatic multi-versioning in build tools.

With that in place, it is questionable what the actual advantage of semver over a simpler monotonic versioning scheme (like a build number or UTC date) is.

