A receiver isn't a method. It's the "self" instance the method is invoked on. It's standard OOP terminology.
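To make the terminology concrete, here's a minimal Go sketch (Counter is a made-up type for illustration):

```go
package main

import "fmt"

// Counter is a hypothetical type used to illustrate receivers.
type Counter struct {
	n int
}

// Increment has a pointer receiver: c plays the role of "self",
// the instance the method is invoked on.
func (c *Counter) Increment() {
	c.n++
}

// Value has a value receiver; it operates on a copy, so it can read
// but not mutate the original.
func (c Counter) Value() int {
	return c.n
}

func main() {
	c := &Counter{}
	c.Increment()
	c.Increment()
	fmt.Println(c.Value()) // 2
}
```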
I'm with you regarding marshaling, though not because it isn't an industry-standard term; it's that Go misapplies it. Marshaling has historically referred to packaging up a remote procedure call and its return value; for example, you call "CreateFoo()" with a "CreateRequest", and the latter must be marshaled into a payload that the remote end can read. For a network call this requires serializing the data to bytes in some format, but for a local call it could just be a pointer to a data structure. However, people have long mixed the two terms. Python's standard library also misuses "marshal" to refer to serialization.
The history on marshaling goes back further than that. Smalltalk used the word more or less how it's used in Go. There is definitely an RPC connotation these days though, probably because Smalltalk was based on message passing.
Not true. Only the highest-level researchers get paid a ludicrous amount. Most engineers at OpenAI in SF get paid about $300K, which barely covers taxes and rent in SF.
Is that really true? I saw some one-bedroom apartments there for $3,500 -- expensive, but even so, $300K is a decent amount. According to an online tax calculator, the take-home pay is about $185K, or $15.4K per month.
Even if you pay that rent and spend another $3K per month on food, bills, and some spending money, that's still almost $9K you can save per month. Sounds VERY lavish to me. So lavish that in 15 years of work you could save up $1.6 million, buy a cheap house somewhere (or already own one from buying it while working there) for $500K, and have $1.1 million to live frugally on while working on side projects for a little extra cash.
The strange thing about this retirement plan is that it assumes you will spend 15 years living somewhere, and then one day just walk away from all the social ties you've made during that period, during the prime of your life. I suppose there are people who could probably do this, but for most folks, this is not going to be a joyful path.
Wildly incorrect. $300k will let you live comfortably in SF, without roommates.
The fact that some people cannot live alone on that salary in SF does not mean most people cannot. I make considerably less than that and I could make it work. I wouldn't, because that would destroy my ability to save anything, but I could do it.
But not for the Bay Area. As of November 2024, the median price of a home in the Bay Area is around $1,316,500.
On a 3:1 debt-to-income ratio, I'd put the median income for those who can afford to buy a home there at approximately $350K (assuming 80% financing on the median home price).
I think the comparison to goto is a little extreme, but actually not too crazy. I could see a future where the accepted best practice is to use parallelized operations for everything, and only use sequential loops in cases where the underlying concept is inherently sequential (gradient descent, iterative hashing, etc.).
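To illustrate the distinction, a minimal Go sketch (toy functions, not a claim about real best practice): the first loop body is independent per element and parallelizes trivially; the second consumes its own previous result, so it has to stay sequential.

```go
package main

import (
	"fmt"
	"sync"
)

// squareAll is embarrassingly parallel: each element is independent,
// so the loop body can run concurrently.
func squareAll(xs []int) []int {
	out := make([]int, len(xs))
	var wg sync.WaitGroup
	for i, x := range xs {
		wg.Add(1)
		go func(i, x int) {
			defer wg.Done()
			out[i] = x * x
		}(i, x)
	}
	wg.Wait()
	return out
}

// iterate is inherently sequential: each step depends on the previous
// result (like gradient descent or iterative hashing), so a plain
// sequential loop is the right tool.
func iterate(x float64, steps int) float64 {
	for i := 0; i < steps; i++ {
		x = x - 0.1*x // each update consumes the previous one
	}
	return x
}

func main() {
	fmt.Println(squareAll([]int{1, 2, 3, 4})) // [1 4 9 16]
	fmt.Println(iterate(1.0, 3))
}
```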
In certain computer science problems, a suboptimal action at time t may give rise to an optimal outcome at time >t.
Why wouldn't this be the case for research generally? Has our community really devolved to the point where things should only be noteworthy insofar as they optimize for SOTA for a given problem?
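A toy sketch of the idea in Go (the reward numbers are made up): the greedy choice at time t maximizes the immediate reward but loses to a policy that accepts a worse action now for a better outcome later.

```go
package main

import "fmt"

// greedyTotal picks the action with the best immediate reward,
// then collects whatever follows from that choice.
func greedyTotal(now, later []int) int {
	best := 0
	for a := range now {
		if now[a] > now[best] {
			best = a
		}
	}
	return now[best] + later[best]
}

// optimalTotal considers the full horizon and picks the action
// with the best combined reward.
func optimalTotal(now, later []int) int {
	best := 0
	for a := range now {
		if t := now[a] + later[a]; t > best {
			best = t
		}
	}
	return best
}

func main() {
	// action 0: reward 5 now, 0 later; action 1: reward 1 now, 10 later
	now := []int{5, 1}
	later := []int{0, 10}
	fmt.Println(greedyTotal(now, later))  // 5
	fmt.Println(optimalTotal(now, later)) // 11
}
```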
The "concise" option is not terribly concise in my experience. Here's an example:
> Q: "What is the difference between serializable and snapshot isolation?" (with concise enabled)
> A: "According to geeksforgeeks.org, both snapshot isolation and serializable isolation levels are considered the highest levels of transaction isolation in DBMS. These levels are used to avoid concurrency issues such as dirty read, phantom read, and non-repeatable read. Both isolation levels avoid these issues, but in different ways.
As stated in techcommunity.microsoft.com, both snapshot and serializable isolation levels provide a read-consistent view of the database to all transactions. In either of these isolation levels, a transaction can only read data that has been committed. Moreover, a transaction can read the same data multiple times without ever observing any concurrent transactions making changes to this data. The unexpected read committed and repeatable read results that are possible in other isolation levels are not possible in serializable or snapshot isolation levels.
In serializable isolation level, SQL Server acquires key range locks and holds them until the end of the transaction. A key range lock ensures that, once a transaction reads data, no other transaction can alter that data - not even to insert phantom rows - until the transaction holding the lock completes. In snapshot isolation level, SQL Server does not acquire any locks. Thus, it is possible for a concurrent transaction to modify data that a second transaction has already read. The second transaction simply does not observe the changes and continues to read an old copy of the data.
Serializable isolation level relies on pessimistic concurrency control. It guarantees consistency by assuming that two transactions might try to update the same data and uses locks to ensure that they do not but at a cost of reduced concurrency - one transaction must wait for the other to complete and two transactions can deadlock. Snapshot isolation level relies on optimistic concurrency control. It allows transactions to proceed without locks and with maximum concurrency, but may need to fail and rollback a transaction if two transactions attempt to modify the same data at the same time. According to techcommunity.microsoft.com, it is clear that there are differences in the level of concurrency that can be achieved and in the failures (deadlocks vs. update conflicts) that are possible with the serializable and snapshot isolation levels.
As explained in learn.microsoft.com, snapshot isolation must be enabled by setting the ALLOW_SNAPSHOT_ISOLATION ON database option before it is used in transactions. This activates the mechanism for storing row versions in the temporary database (tempdb). You must enable snapshot isolation in each database that uses it with the Transact-SQL ALTER DATABASE statement. This is different from traditional isolation levels of READ COMMITTED, REPEATABLE READ, SERIALIZABLE, and READ UNCOMMITTED, which require no configuration. The READ_COMMITTED_SNAPSHOT ON option allows access to versioned rows under the default READ COMMITTED isolation level. If the READ_COMMITTED_SNAPSHOT option is set to OFF, you must explicitly set the Snapshot isolation level for each session in order to access versioned rows.
In conclusion, both snapshot and serializable isolation levels are the highest levels of transaction isolation in DBMS, but they avoid concurrency issues in different ways. Serializable isolation level uses pessimistic concurrency control, while snapshot isolation level uses optimistic concurrency control. Serializable isolation level guarantees consistency, but it is achieved at the cost of reduced concurrency, while snapshot isolation level provides maximum concurrency but may need to fail and rollback a transaction if two transactions attempt to modify the same data at the same time. To use snapshot isolation, you must enable it with the Transact-SQL ALTER DATABASE statement."
When I feed this into ChatGPT 4 and ask for a more concise summary, I get this, which is closer to what I'd expect from Phind with the "concise" option enabled:
> "Snapshot and serializable isolation levels are the highest levels of transaction isolation in DBMS, preventing concurrency issues like dirty read, phantom read, and non-repeatable read. Serializable isolation relies on pessimistic concurrency control, using locks to ensure consistency, which may reduce concurrency and cause deadlocks. Snapshot isolation uses optimistic concurrency control, allowing maximum concurrency but may require rolling back transactions if conflicts arise. Snapshot isolation must be enabled with the Transact-SQL ALTER DATABASE statement before use."
Expert mode combined with Concise mode tends to work best. The default mode is more verbose and doesn't listen to the concise flag as closely as Expert mode. This is definitely something we're working on.
Going through this class now, and I can attest that it is in fact tough as hell. However, I already feel as if it will be the most rewarding learning experience in the entire program.
The vanishing gradient problem has to do with a type of computer program called an artificial neural network. In an artificial neural network, different parts of the program (layers) perform small calculations and pass their results along. A 'gradient' is a signal, computed during training, that tells each part how to adjust its calculation to reduce the network's mistakes.
The problem arises when the gradient gets weaker and weaker as it travels backward through the layers of the network. The earlier layers therefore receive much less of the learning signal than the later layers do, which makes it harder for the network to learn the right things. The vanishing gradient problem is a common issue that can prevent a deep artificial neural network from training properly.
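A rough sketch of the effect in Go (assuming sigmoid activations and ignoring weights entirely, so this is a simplification): backpropagation multiplies one derivative factor per layer, and since a sigmoid's derivative is at most 0.25, the product shrinks exponentially with depth.

```go
package main

import (
	"fmt"
	"math"
)

// sigmoidDeriv is the derivative of the logistic sigmoid at x.
func sigmoidDeriv(x float64) float64 {
	s := 1.0 / (1.0 + math.Exp(-x))
	return s * (1.0 - s)
}

// gradientAtLayer multiplies one sigmoid-derivative factor per layer,
// as backpropagation does when pushing the error signal back through
// a deep stack. The derivative is at most 0.25, so the product
// shrinks exponentially with depth.
func gradientAtLayer(depth int) float64 {
	g := 1.0
	for i := 0; i < depth; i++ {
		g *= sigmoidDeriv(0) // 0.25 at the sigmoid's steepest point
	}
	return g
}

func main() {
	for _, d := range []int{1, 5, 10, 20} {
		fmt.Printf("depth %2d: gradient factor %.2e\n", d, gradientAtLayer(d))
	}
}
```

By depth 10 the factor is already below one millionth, which is why the earliest layers of a deep sigmoid network barely learn at all.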