
Wow, I remember this. My Dad won an Apple Newton MessagePad 100 and I loved using it as a kid to draw. While I appreciated the ability to write notes, I found that entering letters one at a time using the special character input was too much friction for regular use. You basically had to relearn some of your ABCs to write letter by letter. Sadly the device wasn't really used by either my Dad or me because it was too much hassle to relearn how to write.


Would really like to know what makes a person (or group of people) invest the time and energy to do this? Is there a group of hobbyist gamers who work on titles they love? Is it about digital conservation?


I've spent a lot of time reverse-engineering vintage synthesizer firmware (which is a bit simpler than modern games). I did complete end-to-end annotations of these two vintage synth ROMs:

- https://github.com/ajxs/yamaha_dx7_rom_disassembly

- https://github.com/ajxs/yamaha_dx9_rom_disassembly

It started because I was just curious about how these devices actually worked. In the end I learned a lot of invaluable skills that really broadened my horizons as an engineer. I got a chance to talk to a handful of incredibly smart people too. The actual work can be a lot of fun: it's like piecing together a really large and technical jigsaw puzzle. In my case, it also led to me being able to release a fun firmware mod: https://github.com/ajxs/yamaha_dx97

In case anyone is curious about how I worked, I wrote a bit of a tutorial article: https://ajxs.me/blog/Introduction_to_Reverse-Engineering_Vin...
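A typical first step when annotating a ROM like this is finding the CPU's entry point. The DX7's HD6303 is a 6800-family CPU, which keeps its reset vector big-endian in the top two bytes of the address space. A minimal sketch (synthetic ROM bytes for illustration, not the real DX7 image):

```python
import struct

# 6800-family CPUs (including the DX7's HD6303) fetch their reset
# vector from $FFFE-$FFFF, stored big-endian. For a 32 KiB ROM mapped
# into the top of the address space, that's the last two bytes.

def reset_vector(rom: bytes) -> int:
    """Return the address the CPU jumps to on reset."""
    (addr,) = struct.unpack(">H", rom[-2:])
    return addr

# Synthetic 32 KiB image whose last two bytes encode $C000.
rom = bytes(0x7FFE) + bytes([0xC0, 0x00])
print(hex(reset_vector(rom)))  # -> 0xc000
```

From there you label the reset handler in your disassembler and work outward, one routine at a time.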

It can be a bit analogous to archaeology too. Even though in my case the DX7 is only 42 years old, that was an aeon ago in computing terms. You gain a bit of insight into how different engineers used to design and build things. Even though development for the N64 is fairly recent, from memory the console had some interesting constraints that made development tricky.


> the console had some interesting constraints that made development tricky

The ones that come to mind are the tiny 4KB texture cache, high memory latency (thanks Rambus), and inefficient RCP microcode. The N64 could have been so much more with a few architectural tweaks but developers liked the Playstation much better on account of its simplicity despite it being technically inferior in most respects.


>developers liked the Playstation much better on account of its simplicity despite it being technically inferior in most respects.

That statement is surprising, as I remember the PlayStation from my childhood as obviously graphically superior. I'm not doubting you, but what explains the difference between the technical specs and user perception?


The N64's processor had triple the clock speed of the Playstation's, on top of having more RAM (up to 8MB versus the Playstation's 3MB). Its graphics subsystem could also do perspective-correct texture mapping and push more polygons per second. It also had a hardware FPU, which the Playstation notably lacked. It's pretty widely acknowledged that the N64's Achilles heel was its small texture cache, which caused developers to use lower-resolution textures than they otherwise would, smoothed over with heavy anti-aliasing. That's what gives N64 games their characteristic smeary look versus the Playstation's wobbly, pixelated aesthetic. You probably thought the PS1 looked better because of its more detailed textures.
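To put that texture cache in perspective, here's a rough back-of-the-envelope sketch (my own illustrative numbers, assuming uncompressed textures and ignoring the mipmap and palette space that also lived in TMEM):

```python
# How much texture fits in the N64's 4 KB texture memory (TMEM)?
TMEM_BYTES = 4096

def texture_bytes(width, height, bits_per_texel):
    """Raw size of an uncompressed texture."""
    return width * height * bits_per_texel // 8

# A 16-bit 64x32 texture exactly fills the cache...
print(texture_bytes(64, 32, 16))    # -> 4096
# ...while a still-modest 128x128 16-bit texture is 8x too large.
print(texture_bytes(128, 128, 16) // TMEM_BYTES)  # -> 8
```

Anything bigger had to be chopped into tiles and streamed through TMEM piece by piece, which is exactly the kind of extra work that pushed developers toward small, stretched textures.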

I've no doubt (as a thoroughly amateur video game historian) that with a few small tweaks Nintendo would have eaten Sony's lunch that generation. In that alternate universe Sega would have had better developer support for the Saturn and done crazy stuff with their super-wacky architecture too, but I digress...


That's interesting! I thought the answer was going to be related to CD vs cartridge capacity.

It also sounds crazy that games like Tekken 3 could run in 3MB of RAM total, when just the current music track feels like it could take that much space.


Many PSX games had the music on the CD as standard audio, so they didn't require much of any effort from the console to play.
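That design choice makes sense once you do the arithmetic. Red Book CD audio is 44.1 kHz, 16-bit, stereo PCM, so buffering even one track in RAM was never an option (a rough sketch, with a track length of my own choosing):

```python
# Raw size of uncompressed Red Book (CD-DA) audio.
SAMPLE_RATE = 44_100   # Hz
SAMPLE_BYTES = 2       # 16-bit samples
CHANNELS = 2           # stereo

def cdda_bytes(seconds):
    return SAMPLE_RATE * SAMPLE_BYTES * CHANNELS * seconds

# A 3-minute track:
print(cdda_bytes(180) / 2**20)  # ~30.3 MiB, ten times the PS1's total RAM
```

The drive plays those sectors directly as audio, so the CPU and RAM stay free for the game itself.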


It's technically 2MB of RAM + 1MB of VRAM.


I guess you’ve never kicked ass and chewed bubble gum


It's hard to do when you're all out of gum


Maybe they just really love the game. This is a form of tribute.

I too have a beloved video game from my childhood: Mega Man Battle Network 2. That game changed my life. I learned English and became a programmer because of it. I have two physical copies of it in my collection, one of them factory sealed.

Sometimes I open the game in IDA and try to reverse engineer bits and pieces of it. I just want to understand the game. I don't have the time, the dedication or even the low level programming knowledge that these badass folks in the ROM hacking community have, but I still try it.


I'm the person who reimplemented Cosmo's Cosmic Adventure (DOS, 1992) and my original reasoning was a desire to know how it was able to do some of the graphical tricks it did on such underpowered hardware (it could run on an IBM AT). The game wasn't anything special by any metric, but it was an important piece of my childhood and I felt an attachment to it. I also learned a hell of a lot about the PC platform, the C ecosystem from the 80s, and my own tastes as an engineer.

https://github.com/smitelli/cosmore

https://cosmodoc.org/


I gave a talk at Game On Expo earlier this year about decompiling Castlevania: Symphony of the Night (https://github.com/xeeynamo/sotn-decomp) and talked a little bit about exactly this. Almost everyone who works on it loves the game. Beyond that, motivation varies: some want to see ports, some want to mod, some want to learn everything they can, some want to preserve. Along with those, I also like the challenge (not unlike sudoku).

Doing it long enough requires learning compiler history and theory, and understanding the business and engineering pressures of making the game; it occasionally reveals why parts of the game work the way they do.

I stream working on SotN and am happy to answer any questions in chat if you’re interested in learning more - https://m.twitch.tv/madeupofwires/home


You climb a mountain because it's there. Different people have different mountains.

It's an interesting challenge: you can improve the game or make it do X, Y, Z; you can add speedrunning or competitive gaming features; solving puzzles gives a sense of accomplishment; a certain small group gives you social clout; etc.


In addition to those categories, speedrunning glitch hunters tend to gravitate to participating in these projects as well. E.g. the Twilight Princess decomp was started primarily by and for the speedrunning community.


It's also the endgame for romhacking, once a game is fully decompiled modders can go far beyond what was feasible through prodding the original binary. That can mean much more complicated gameplay mods, but also porting the engine to run natively on modern platforms, removing framerate limits, and so on.


This is how the text adventure/interactive fiction community started. Some hackers reverse engineered the Infocom z-machine then built new languages and compilers so new games could be created.


Preservation and ease of modification. New console units are not being made anymore, and the number of old ones is limited, they can break, and there is an issue with output video formats that are incompatible with modern monitors/TVs. There is emulation, but it's not perfect and can be demanding. Decompilations enable people to create native binaries for different platforms. This makes playing the game easier and more accessible.


Same. Is there a project page or anything that explains the context, the reasons, the history behind this? I bet it would be very interesting.

The Readme is too technical and misses a writeup on the soul of the project: Section 1, title. Section 2, already talking about Ubuntu and dependencies. Where is section "Why?" :-) ?


Based on the commit history, this has been one person's on-off project for 3 years. My guess is that they like this game and were curious about how decomps come to fruition - and what better way to find out than to do one?


There are people who spend hours and hours analyzing bit characters in things like Lord of the Rings (where did the Blue Wizards go? Who is Tom Bombadil?) or Star Wars. This is a similar fan obsession. Remember fan comes from fanatic.


Nostalgia: a sentimental longing or wistful affection for the past, typically for a period or place with happy personal associations.


This looks cool but that D-Pad is going to hurt after a few minutes of play.


The kind of person who does this has many Game Boys to play on; this is for art and concept only, I imagine.


Yes, I had the first version and loved it. Back then, searching for local files took forever. I was upset when it was discontinued. Even today, on a high-spec Windows 11 Pro machine, search isn't as good as it was with Google Desktop back then.


Try Everything from voidtools - it's incredibly fast (both search and indexing).


Totally agree it's awesome! It's a great example of performance as a feature.


I'm really glad there was an effort by streamers and influencers (shout-out to Asmongold and Penguinz0) to back Ross up and push back on PirateSoftware's incorrect take on this initiative. For a while it looked like the UK wasn't going to get 100k votes and that the EU initiative wasn't going to hit the million mark. Then about a week ago content got uploaded and this initiative got a much needed boost.

The console wars are no longer company vs. company; they're company vs. consumer. So many anti-consumer shenanigans are going on in the video game industry that a message needs to be sent.

If you care about video games, even in passing as a complete casual, please sign the petitions. I've done my dash for the UK.


Giving Asmongold of all people a shout-out feels so wrong... Guy is a creep to say the least, but I guess even a broken clock is right twice a day.


Not disagreeing but do you have anything to back that up for those of us who aren't in the loop?


As a parent I worry about this technology being used on children. While one way of preventing this is to limit photos of children on social media, that's extremely difficult to maintain once they hit high school/secondary school. That approach also doesn't stop someone from taking source photos or video with their phone.


One of the most popular models on civit.ai today (a Hugging Face for diffusion models, but de facto basically a porn site) is the “age slider”.

It’s 10000% mostly used for creating this kind of horrifying content.


You can get Gemini 2.5 Pro to help you with infra and deployment, code quality and ownership, testing, CI/CD and automation, as well as documentation... as long as you know to ask for it.

You also don't necessarily need all of that to hack together an MVP. I think a lot of people are not acknowledging that and they are negatively looking down on people embracing a new way of 'writing' code. Users don't care how you make a thing, they just want the thing to work.

Before ChatGPT made a breakthrough in LLMs, code was leverage. Now, LLMs are leverage. I think people suddenly finding that their leverage has been significantly eroded is the source of the negativity towards a "vibe coder".

So while anyone can write a book (the technology has existed since about 500 CE), few do, and there are fewer really good books. No matter the medium it's how you leverage the tool(s) you got.

I think this is a Prometheus moment, LLMs are giving coding to humanity, and it's getting adopted right now by people brave enough to try and embrace it even though 'software development' might be way outside of their comfort zone. I think it's worth cheering those people on even if they fail their way forward.


I think AI is just leverage for whatever you want to do. How you use that leverage, and what you apply it to is what matters. It's just another tool for humanity.


I don't really understand where this fits in the market. It's not as intelligent as the pack leaders and is about on par with GPT-4o mini. Comparing it to GPT-4o mini further: while Nova Pro is a bit faster, GPT-4o mini is a lot cheaper [Source: https://artificialanalysis.ai/models/nova-pro/providers].

In terms of value for money, I would probably go with GPT-4o mini and not Nova Pro. Maybe Amazon feels that it needs its own offering to stay relevant?


My guess is it's also about enterprise agreements.

For many larger enterprises, governments, etc, the barrier to trying these things is to pay for them (new contracts, RFPs, etc).

But all of them already have enterprise agreements with MS, Amazon, etc. So there must be some class of customers to whom it's easy to just add this to their AWS bill.


This isn’t an AWS product, it’s Amazon (the non-AWS side). I don’t think this has anything to do with AWS billing.

AWS already has Amazon Q, which is its chatbot offering for AWS customers.



Amazon Nova is a foundation model created by Amazon and is offered as one of the models you can use in AWS Bedrock, so the model gets a marketing page for it. Note that Llama also has an AWS marketing page (as do other models), but that doesn’t make them AWS products: https://aws.amazon.com/bedrock/llama/

Amazon Nova Chat is a different product that uses Nova, but it's not AWS. Notice that if you try to use Nova Chat, you log in using your Amazon.com account and not an AWS account.


These businesses can easily access Anthropic models through AWS Bedrock. All it requires is a simple clickthrough EULA. That's what we do at the F500 non-tech company where I work. The same is true of OpenAI models in Azure.

I can't imagine AWS is going to get much usage of these models... but you have start somewhere I guess.


This is 100% it


Where exactly is the Apple Intelligence that was advertised? Siri absolutely cannot go into your phone's calendar and see who you bumped into at some bar or café. I've been using the Pixel 9 Pro as my daily driver and while I really wanted to install CalyxOS on it, I've found Gemini to be actually useful (and I'm generally biased against Google).

Apple is behind the curve like Google was prior to Gemini 2.5 Pro, but unlike Google, I cannot see Apple having the talent to catch up unless they make some expensive acquisitions and even then they will still be behind. I was shocked at how good Gemini 2.5 Pro is. The cost and value for money difference is so big that I'm considering switching away from my API usage of Claude Sonnet 3.7 to Gemini 2.5 Pro.


Also, where is Apple Intelligence at all for any other language? I'm from Germany and my phone is set to German. There is still no option to even enable it, even though the phone was marketed for Apple Intelligence here all the same.


They’re being sued because it is nowhere. They’re trying to do it on device with a tiny model. All the google Ai stuff is cloud based, run on their massive Google cloud data centres. Apple don’t have those, so getting more talent isn’t gonna fix it!


No. Siri has three parts.

An on-device AI model, which handles light tasks like notification summaries; Private Cloud Compute, which runs a more powerful version of the same model and handles more complex operations; and integration with OpenAI, Gemini, etc. for LLM tasks.


In practice it rarely uses ChatGPT. And even when it does, it doesn’t work properly.

Eg. The other day I had a birthday invitation for a kids party sent to me as a jpg. I figure, I’ll ask Siri to add it to my calendar. After a lot of persuasion I manage to get it to send it to ChatGPT to process it (which makes no sense since even the photos app can read text in an image).

But anyway, it sends it to ChatGPT. Which replies by reading out the contents of the image including all the day and time and location. And that’s it. No action from Siri.

So I follow up with ‘so can you add it to my calendar now?’ Siri replies “What would you like me to add to your calendar?”

That was 5 minutes of my life I won’t get back…

