Assembly programming is unlike most programming languages, and as such it would be a really tough introduction to the field.
Can you explain a bit more about how you became interested in assembly and programming the Apple II? And specifically, why you want to start the journey towards programming there?
The original magic system from Final Fantasy 1 (you can only cast N spells per magic level, maximum of 9) is a copy of the system from Wizardry. The only difference is that you have to type the spell name to cast it in Wizardry, while you get a menu in Final Fantasy.
The FF1 spell list is more directly derived from D&D, though. It even has the Power Words. The sense I've gotten from reading about the development of FF is that the dev team had varied experience with existing RPGs, so the game ended up as a mixture of derivative and original ideas. (The face-to-face side view in combat was apparently inspired by football!)
Freshpaint is hiring! We're a YC- and Intel-backed SaaS company that provides a codeless customer data platform that is default-safe for companies that need to work with sensitive data, like protected health information.
I like the idea of deliberate learning! I also really like the idea of focusing on how to maximize the amount of learning you do, and choosing when to strategically focus on that. So this is really cool general advice, and honestly the feedback that is outlined really could apply to any raiding guild, whether casual or not. Or also just, you know, people. Any people. It's good advice.
I had to set aside the framing device of a raiding guild, though, because the way it's used isn't realistic. It seems to me there's an assumption that learning is happening in a closed system, where learning can only happen through experimentation. That's not how MMORPGs work--people share experiences all the time, both inside and outside the game. People also raid outside of their own guild, which means that raid attempts can include people who've done this before, and can essentially help your guild by talking you through how it's done in-game.
So, in the general case, any attempt at learning within some reasonably closed system in which no learning can happen outside of experimentation, and when experiments are limited in quantity, certainly _must_ try to maximize what you can learn for every experiment. But casual raiding guilds do not fall into that category.
A lot of advice I've seen (such as this) postdates much of the standard library implementation by quite a bit. I could not, however, suggest that the current design for crypto is good or bad, for I have not used it often enough to offer a worthy opinion.
People seem to really click with the net/http design, and that does seem well done, but I imagine the standard library itself is a bit unevenly architected.
That's true, but the standard library also has strictures about backward compatibility that have ossified some bad choices. The crypto examples you give are arguably exactly this; even the later ones are probably a mistake, but are at least consistent with what can't be changed.
context.Context is an interface because it is known there are multiple implementers (several in the context package itself). The sql driver.Value is actually the other way around: the sql package is the consuming package (for the driver).
Abstractions are not a bad thing, but early abstraction often lacks understanding of the problem to solve. Abstraction is complexity. We often need complexity, but we should not add complexity without cause.
And I don't think this is a Java vs. Go thing. Let the program, or the problem if you will, prove to you where you need abstraction.
To be fair, that Zed Shaw essay isn't particularly good.
1. He uses lay (i.e. non-technical) definitions of "indirect" and "abstract" to argue that the technical definitions are somehow flawed. Words have different meanings in different contexts, but he rejects those different contexts.
2. He argues that abstract classes in Java are actually a form of indirection because the abstract modifier "creates an indirect path to the real implementation". This is after he covers that "abstract" can mean "a concept or idea not associated with any specific instance".
3. He claims that no lay definition of abstraction permits the idea of "swapping out the implementation". He fails to understand that, from the client code's point of view, "swapping out the implementation" is a direct consequence of coding to the abstraction.
4. In the words of Rich Hickey, I think Zed is conflating Simple and Easy.
I don't doubt that Zed saw something legitimately objectionable and that drove him to write the essay. But in trying to formulate his point, I think he ended up conflating ideas, relying on false premises, and building an argument that didn't have much of a logical progression.
He's essentially saying "I'm right and everybody else is wrong because I looked it up in a dictionary."
It would be neat if MachineBox could sense whether log noise would be useful in other contexts--e.g., as a metric that can be graphed. Or whether your logging is lacking something that might be useful, or just lacking signal at all (hey, user, your logs are just noise!).
The main issue is that there's already a lot of standard library support for classical strings--things that are just a pointer to a chunk of bytes, and no more; rewriting those functions would be destructive. The C standards group could add to the standard library, I suppose, but generally they are a bit reluctant to do so.
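For illustration, here's a minimal sketch of the conversion tax (the lenstr type is hypothetical, just to make the point): any counted-string type has to be flattened back into a NUL-terminated pointer before existing libc functions can touch it.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical length-carrying string type; not part of any standard. */
    typedef struct {
        size_t len;
        char  *data;   /* not necessarily NUL-terminated */
    } lenstr;

    /* Existing libc functions only understand the classical form, so a
     * conversion (and an allocation) is needed before calling them. */
    static char *lenstr_to_cstr(const lenstr *s) {
        char *out = malloc(s->len + 1);
        if (!out) return NULL;
        memcpy(out, s->data, s->len);
        out[s->len] = '\0';
        return out;
    }

    int main(void) {
        lenstr name = { 5, "hello" };      /* counted string */
        char *c = lenstr_to_cstr(&name);   /* the conversion step libc forces on you */
        if (c) {
            printf("classical strlen sees %zu bytes\n", strlen(c));
            free(c);
        }
        return 0;
    }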
It is not the case that C has a benevolent dictator with a particular vision and drive for the language. It's more like a small group of people who don't want to mess things up. And so not much changes. This is neither good nor bad to me--it simply is; and anyway, these days, you have quite a few options that do offer automatic strings that can also compile into machine code (Go, Rust I suppose?).
At the point when you have to convert between string types and can no longer easily use much of libc without conversions, why not just use a different language that's already gone much farther in the name of safety, usability or other features?
In other words, if your pseudo-C candidate already suffers from most of the interop problems that Rust, D, Nim and Go do, why not just use one of those and at least reap the other benefits they provide?
I mean, I guess it's the best thing if you want to wreak havoc and chaos? Which, perhaps, you do!
Your suggested dichotomy is, of course, a little bit false. But I'm sure you knew that when you wrote it down.
It's entirely possible to write secure programs in C, even with standard functions. Writing your own code does not somehow confer a level of security-consciousness that you lacked when sticking to strings.h. (It does give you a wonderful opportunity to write your own security holes that no one has discovered yet!)
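As a small sketch of that point (nothing exotic, just standard functions used with their bounds): snprintf always NUL-terminates and reports how much it wanted to write, so truncation can be detected instead of silently overrunning a buffer the way an unbounded strcpy would.

    #include <stdio.h>
    #include <string.h>

    /* Copy src into dst (of size dst_size) without overflowing.
     * Returns 0 on success, -1 if the input had to be truncated. */
    static int copy_bounded(char *dst, size_t dst_size, const char *src) {
        int needed = snprintf(dst, dst_size, "%s", src);
        return (needed >= 0 && (size_t)needed < dst_size) ? 0 : -1;
    }

    int main(void) {
        char buf[8];
        if (copy_bounded(buf, sizeof buf, "longer than eight bytes") != 0)
            fprintf(stderr, "input truncated, handle it explicitly\n");
        printf("%s\n", buf);
        return 0;
    }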
I mentioned this somewhere else, but we're in a pretty good place right now with languages; we finally have really solid alternatives to C that can compile to machine code, in both Go and Rust.
> It's entirely possible to write secure programs in C, even with standard functions.
You realize that most standard string functions are outright banned by organizations that care about security. As in, you're not allowed to use them, not even if you pinky swear to be 'careful'.
I highly recommend the static analyzer that ships with clang. I use it frequently in the C project I am currently working on, and it's very good at spotting memory leaks and other potential issues. I've had more than one occasion where I've looked at an issue it's reported and said, nah, this is a false positive...oh, no, wait.
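For anyone who hasn't tried it: you can run it via scan-build (e.g. scan-build make) or directly with clang --analyze file.c. A contrived sketch of the kind of leak it typically flags:

    /* leak.c -- try: clang --analyze leak.c */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    char *greet(const char *name) {
        char *msg = malloc(64);
        if (!msg)
            return NULL;
        snprintf(msg, 64, "hello, %s", name);
        if (strlen(msg) > 32)
            return NULL;   /* early return: 'msg' leaks on this path */
        return msg;
    }

    int main(void) {
        char *m = greet("world");
        if (m) {
            puts(m);
            free(m);
        }
        return 0;
    }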
My understanding is that in the 6502 (NMOS) chip, those other opcodes had undefined behavior, and you might crash if you executed one. So you'd need to know who the manufacturer was to be certain that opcode X was available.
In the CMOS version (65C02), illegal opcodes are treated similarly to NOPs--they don't crash, and don't do anything other than spend cycles--with the caveat that certain of these "illegal" opcodes consume different numbers of bytes and cycles than other illegal opcodes. More confusingly, there is also an actual proper NOP instruction (0xEA), which consumes the one opcode byte and two cycles and exists in both the CMOS and NMOS versions of the chip.
So some of the illegal opcodes are just like NOP, and eat one byte and two cycles; some eat two bytes, and 3 or more cycles; and there's at least one which not only eats 3 bytes, but eight (GASP!) cycles.
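To make that concrete, here's roughly how an emulator might table it. The specific byte/cycle counts below are from memory and differ between data sheets (and between NMOS, Rockwell, and WDC parts), so treat them as illustrative rather than authoritative:

    #include <stdint.h>

    /* Sketch of one way a 65C02 core could handle these opcodes:
     * dispatch them as NOPs that consume a given number of bytes and
     * cycles.  Counts are illustrative; check your part's data sheet. */
    typedef struct {
        uint8_t bytes;   /* how far to advance the program counter */
        uint8_t cycles;  /* how many cycles to charge */
    } nop_shape;

    /* Covers the official NOP ($EA) plus the undefined opcodes. */
    static nop_shape nop_shape_for(uint8_t opcode) {
        switch (opcode) {
        case 0xEA: return (nop_shape){1, 2};  /* the one official NOP */
        case 0x44: return (nop_shape){2, 3};  /* a 2-byte, 3-cycle "NOP" */
        case 0x5C: return (nop_shape){3, 8};  /* the infamous 3-byte, 8-cycle one */
        default:   return (nop_shape){1, 1};  /* many of the rest are 1 byte, 1 cycle */
        }
    }

    /* In the dispatch loop an unrecognized opcode then becomes:
     *   nop_shape s = nop_shape_for(op);
     *   pc += s.bytes;  clock += s.cycles;                          */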
Source: if it's possible for one to consider MOS emulation a hobby, then let's say it's a hobby of mine.
I copied the 6510 illegal opcode behaviour for the SY6502A emulator for my BBC Micro, and I've yet to see a problem from it. If they didn't all have the same PLA, then perhaps the differences didn't cause any useful effects...
(Some BBC Micro games definitely do use illegal opcodes, but I didn't take very careful notes when I was writing ver 1, rather a long time ago. For the current version, I just made sure the Lorenz 6502 test suite ran to completion.)
Yep. And some implementations -- like the Rockwell 65C02, or the later WDC 65C816 -- added new instructions using the previously undefined opcodes, so it wasn't even safe to assume they were all no-ops.
Can that be true? (I'm channelling 1985 here, so I could be well off, but...)
IIRC, some of the janky stuff I did on the Apple //e 6502 did run on the Apple IIc 65C02. I'm pretty sure they didn't fall off into NOP-land. (Again, I could be mis-representing this, but I was taking some shortcuts at one point to accelerate some graphics that ended up in a Broderbund game, so I don't think I am? (Time wastes all memories - I could well be wrong))
Ah-ha, well! There are actually two Apple IIe models. (Well, there was a third "platinum" one too, but let's not talk about that.) One would have shown "Apple ][" at boot time, and this was the original model with a 6502 NMOS-based chip. But they later released an "enhanced" model, which had a 65C02 CMOS chip. You could tell the difference because the later model would have instead shown "Apple //e" when booting.
It's quite possible (perhaps because you typed it as Apple //e!) that you were actually working with that 65C02 variant, and thus, your janky stuff was forgiven by the Gods of Early Personal Computing (blessed be).
I did not know that! I think it said //e on the case and ][ when booting, but I'll have go dig it out of storage. (You'd think that after spending thousands of hours in front of the dang thing, that I'd remember...)