> It really comes down to a choice between a machine-focused (0) or human-focused (1) approach.
Just think of 0-based as offset-based and 1-based as index-based. Both are intuitive just like that. I never get why people arguing over this bring pointers and memory (or anything computer-related) to the table. No normal person is going to understand that, but everyone understands that if you don't move at all (offset 0) you stay at the first (index 1) item. Add one and ... you get the point.
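A tiny sketch of that equivalence in C (the array name and contents are made up for illustration):

```c
/* "Don't move at all" is offset 0; "the first item" is index 1.
   Both name the same cell, and the offset is always index - 1. */
int main(void) {
    int items[3] = {100, 200, 300};
    int by_offset = items[0];      /* move 0 from the start of the array */
    int by_index  = items[1 - 1];  /* the 1st item, written as index - 1 */
    return by_offset == by_index ? 0 : 1;  /* exits 0: they match */
}
```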
"Offset-based" is bringing in pointers; that's the thing it's an offset "from". "The beginning of the array" is just a pointer.
I suppose saying that does have an advantage over explicitly talking about pointers, in that the word "pointer" is a piece of jargon that has a lot of baggage. That's just avoiding jargon, though, not really using a different model.
No, offset means "measuring from an origin"; pointers use that language, they don't provide it.
The advantages of measuring from an origin can accrue to the person choosing to do it, because there are other reasons to do so that aren't about satisfying the CPU.
No, offset means "measuring from a defined location". In a computer, the choice of location is more or less arbitrary[0]. If you pick the first element, you get zero indexing. If you pick a space before the first element, you get one indexing. Either one is a pointer, since a pointer is just computer jargon for "a defined location".
Calling that defined location "origin" doesn't suddenly make it not a pointer.
[0]- NUMA and cache effects aside, as they don't matter for this purpose
Offset-based lets you refer to an abstract location that is after the last element without resorting to n + 1.
    [ ] [ ] [ ] [ ] ... [ ]
     0   1   2   3      n-1  n
We can regard this n as a "virtual zero", which makes the element at n - 1 also indexable as just -1. The index -n then aliases to 0.
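A minimal sketch of that aliasing in C, treating the one-past-the-end pointer as the "virtual zero" (array name and contents are made up):

```c
#include <assert.h>

int main(void) {
    int a[5] = {10, 20, 30, 40, 50};
    int n = 5;
    int *end = a + n;             /* the abstract location after the last element */

    assert(end[-1] == a[n - 1]);  /* -1 reaches the last element */
    assert(end[-n] == a[0]);      /* -n aliases back to index 0 */
    return 0;
}
```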
In Google standard SQL, you are simply not allowed to put a bare number inside the square brackets when accessing an array; you must specify which way you mean. So `SELECT some_numbers[OFFSET(1)], some_numbers[ORDINAL(1)]` is allowed but not `SELECT some_numbers[1]`.
That's actually good verbosity: the intent is very clear with this method, and multiple people can read the code and unambiguously identify that intent without having to talk to the original author.
This type of verbosity makes sense in a big organization where the left hand doesn't talk to the right hand much, so communication naturally evolves to happen at the code level.
I like, for example, using enum types to ensure that what's actually passed is the expected value, even though fundamentally the semantics need not be much more complicated than integer values. You could do the same thing here: make offsets and indices two distinct integer types to avoid any ambiguity.
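A rough sketch of that idea in C (the type and function names are invented for illustration): wrapping the two kinds of integer in distinct struct types makes mixing them a compile-time error, even though both are plain ints underneath.

```c
typedef struct { int value; } Offset;   /* 0-based: distance from the start */
typedef struct { int value; } Ordinal;  /* 1-based: first, second, third... */

static int at_offset(const int *a, Offset o)   { return a[o.value]; }
static int at_ordinal(const int *a, Ordinal i) { return a[i.value - 1]; }

int main(void) {
    int a[3] = {10, 20, 30};
    /* both reach the first element; mixing the types is a compile error,
       e.g. at_offset(a, (Ordinal){1}) will not build */
    return at_offset(a, (Offset){0}) == at_ordinal(a, (Ordinal){1}) ? 0 : 1;
}
```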
But I think that still leaves open the question of "offset from what?"
If you move zero from the first element, you're still at the first element... but if you move zero from the fourth element you're still at the fourth element. If you move one from before the first element, you're at the first element.
I think you're more or less right, but I think we still need something to motivate the first element as the point of reference, and the machine focus is one way to do that.
In some languages/environments, the array is literally the same as the pointer to the first element.
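C is the usual example. A small sketch (array name and contents made up) of how the array name decays to a pointer to its first element, so indexing is just arithmetic from that origin:

```c
#include <assert.h>

int main(void) {
    int a[4] = {7, 8, 9, 10};
    int *p = a;                  /* the array name decays to &a[0] */

    assert(a[2] == *(a + 2));    /* a[i] is defined as *(a + i) */
    assert(a[2] == p[2]);        /* indexing through the pointer is identical */
    return 0;
}
```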
Other languages are more sophisticated (like humans are!) and can say that position 0 does not exist. An array of size 0 has no elements. An array of size 5 has elements 1st through 5th. An array of size N has 1st through Nth, inclusive.
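A tiny sketch of those 1-based semantics layered over C storage (the helper names are hypothetical): ordinals run 1 through n, and position 0 simply isn't accepted.

```c
#include <assert.h>

typedef struct { int *data; int n; } Array1;   /* a 1-based view of n ints */

static int get(Array1 a, int ordinal) {
    assert(1 <= ordinal && ordinal <= a.n);    /* position 0 does not exist */
    return a.data[ordinal - 1];
}

int main(void) {
    int storage[5] = {1, 2, 3, 4, 5};
    Array1 a = { storage, 5 };
    return (get(a, 1) == 1 && get(a, 5) == 5) ? 0 : 1;  /* 1st..5th exist */
}
```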
"ordinal" is preferred to "index", as "index" doesn't naturally suggest starting at "1", or even restricting the key to integers. "Ordinal" starts from 1, everyone agrees (except mathematicians, of course :-) ).
Do people not study grammar in American schools? I thought all kids learn the distinction between cardinal (one, two, three) and ordinal (first, second, third) numbers.
At the age that cardinal and ordinal numbers are taught, kids simply don't remember "cardinal" and "ordinal", they remember "one, two, three" and "first, second, third".
99% of the instances where I've seen "ordinal" outside of this thread have been in code/documentation. It is not a common word in everyday language.
The English grammar that Americans study in school is likely somewhat different from the English grammar taught outside of America, since the goal of the latter is focused on helping students map their native language onto English. I would agree that `ordinal` is an uncommon word for many Americans, but it wouldn't at all surprise me if there were languages where the equivalent word was far more common, and therefore its use and translation was part of a standard English-as-a-second-language curriculum.
> The English grammar that Americans study in school is likely somewhat different from the English grammar taught outside of America, since the goal of the latter is focused on helping students map their native language onto English
English is spoken as a native language in many countries outside of America.
I disagree. The CS definition of array, sure, but if you were to say “we have an array of options to eat”, most people with a high school education would know what you mean.
And then the counting niceties come along: those perennial +1 mistakes are caused by the fact that, e.g. 4 and 5 are two numbers but they are only one apart.
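A small sketch of that fencepost effect in C (the bounds are arbitrary): counting an inclusive range needs the easy-to-forget +1, while a half-open range makes the count just end minus start.

```c
#include <stdio.h>

int main(void) {
    int start = 4, last = 5;                 /* 4 and 5: two numbers, one apart */

    /* inclusive range [start, last]: the count needs the +1 */
    int inclusive_count = last - start + 1;  /* 2 */

    /* half-open range [start, end): the count is simply end - start */
    int end = last + 1;
    int looped = 0;
    for (int i = start; i < end; i++)
        looped++;

    printf("%d %d %d\n", inclusive_count, end - start, looped);  /* 2 2 2 */
    return 0;
}
```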