> But that doesn't mean that there is any remotely moral case for undoing the green revolution and allowing billions to starve.
One question is whether it's possible to advance technology to fulfill the promise of the green revolution without devaluing human creativity through the creation and advancement of genAI. Or, past a certain point, do improvements in health and ecological outcomes become inextricably linked with new technologies that cause conflict? What actually drives such a process?
I think more people might become interested in why we keep ending up here, talking about new possibilities conflicting with stability again and again, much as the negative effects of smartphones are being discussed now.
> One question is whether it's possible to advance technology to fulfill the promise of the green revolution without devaluing human creativity through the creation and advancement of genAI.
I have to admit I'm not quite sure what you mean, and I do admit full guilt in starting us down the path of "mixed analogies" :). I'll try my best, though.
> Or, past a certain point, do improvements in health and ecological outcomes become inextricably linked with new technologies that cause conflict? What actually drives such a process?
I do think, with respect to life-sustaining things -- medicine, pharma, food, shelter, water, energy -- that a combination of specialization and automation is necessary to raise the collective standard of living, and that labor alienation stems from that same combination.
Where I struggle is coming up with an affirmative argument that an artist should benefit from automation of medicine or farming, but that an alienated lab tech or food factory worker should not benefit from automated art.
Another way to look at this is: the less you pay for art-as-entertainment, the more resources you have to buy free time to produce your soul-work (whatever that may mean to you).
> Where I struggle is coming up with an affirmative argument that an artist should benefit from automation of medicine or farming, but that an alienated lab tech or food factory worker should not benefit from automated art.
> Another way to look at this is: the less you pay for art-as-entertainment, the more resources you have to buy free time to produce your soul-work (whatever that may mean to you).
Ah, yes. The alienated workers of the world will warm their weary souls at the hearth of derivative algorithmic creativity units. The reduced price and efficient delivery of each drone's creativity units will obviously give them more free time.
Perhaps we can even come up with a pill that'll let the drones feel entertained without any content at all. If the side effects are well-tolerated, they can take it before work.