Haven't watched the video yet, but I don't see the vibrating motor/linear actuator mentioned. I'm assuming this is still included? I'll be switching my Pebble Time 2 order over, these look fantastic.
I don't really need it, but maybe my setup is too simple. I set my laptop monitor to auto-right, external display to auto-left and that's it. Set it and forget it for me.
In the example you compare something with 20 million daily users to the energy usage of 25,000 homes. I feel like comparisons like this mostly work (and feel scary) because we don't really grasp how much bigger 20 million really is. It feels a bit like scaremongering to me (without providing context on AI energy use vs other online activities/services).
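To make the scale concrete, here's a quick back-of-envelope sketch (the ~10,000 kWh/year figure for an average home is an assumed ballpark, not a sourced statistic, and I'm taking the 25,000 homes / 20 million users numbers at face value):

```csharp
using System;

// Back-of-envelope: what does "25,000 homes for 20 million daily users" mean per user?
double homes = 25_000;
double kWhPerHomePerYear = 10_000;     // assumed ballpark average household usage
double dailyUsers = 20_000_000;

double totalKWhPerYear = homes * kWhPerHomePerYear;        // 2.5e8 kWh/year
double kWhPerUserPerYear = totalKWhPerYear / dailyUsers;   // 12.5 kWh/year per user
double whPerUserPerDay = kWhPerUserPerYear / 365 * 1000;   // ~34 Wh per user per day

Console.WriteLine($"{kWhPerUserPerYear} kWh/user/year, ~{whPerUserPerDay:F0} Wh/user/day");
```

On those assumptions it works out to roughly 34 Wh per user per day, on the order of running a laptop for well under an hour: exactly the kind of per-user context the headline comparison leaves out.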
This is a great point regarding what we ought to consider when adapting our lifestyle to reduce negative environmental impact:
> In deciding what to cut, we need to factor in both how much an activity is emitting and how useful and beneficial the activity is to our lives.
Although I would extend “our lives” to “society”. His own example of a hospital emitting more than a cruise ship is a good illustration of this; and, as a more absurd example, it would drastically cut emissions if we removed all humans and replaced them with LLMs (which obviously defeats the entire point, because then the LLMs are no longer needed either).
Continuing this line of thought: when considering your use of an LLM, you ought to weigh not merely its emissions and water usage, but also the larger picture of how it benefits human society.
For example: Is it based on ethically sound approaches? (If it is more like “the ends justify the means”, do we even know what those ends are?) What are its foreseeable long-term effects on human flourishing? Will it (unless regulated) harm the livelihoods of many people while widening the wealth gap with the tech elites? Does it negatively impact open information sharing (the willingness to run self-hosted original-content websites or communities open to the public, or even the feasibility of doing so[0][1]), the motivation and capability to learn, creativity? And so forth.
...how do you think you got your job? You ever see those old movies with rows of people with calculators manually balancing spreadsheets with pen and paper? We are the automators. We replaced thousands of formerly good paying jobs with computers to increase profits, just like replacing horses with cars or blacksmiths with factories.
The reality of AI, if it succeeds in replacing programmers (and there's reason to be skeptical of that), is that it will simply be a "move up the value chain". Instead of developing highly technical skills, former programmers will develop new ones: either helping to build models that meet new goals, or guiding those models to produce things that meet requirements. It will not mean all programmers are automatically unemployable, but we will need to change.
A few questions popped into my head. Can you retain the knowledge required to evaluate model output, and so effectively help and guide models, if you no longer do the work yourself? For humans to flourish, does it mean simply “do as little as possible”? Once you have automated everything, where would one find meaningful activity that makes one feel needed by other humans? By definition, automation is about scaling, and the higher up the chain you go, the fewer people are needed to manage the bots; what do you do with the rest? (Do you believe the people who run the models for profit and benefit the most would volunteer to redistribute their wealth and enact some sort of post-scarcity communist-like equality?)
> Can you retain the knowledge required to evaluate model output, and so effectively help and guide models, if you no longer do the work yourself?
I mean, education will have to change. In the early years of computer science, the focus was on building entire systems from scratch. Now programming is mainly about developing glue between different libraries to suit our particular use case. This means we need to understand far less about the theoretical underpinnings of computing (hence all the griping that programmers never need to write their own sorting algorithms, so why does every interview ask for one?).
It's not gone as a skill, it's just different.
> For humans to flourish, does it mean simply “do as little as possible”? Once you have automated everything, where would one find meaningful activity that makes one feel needed by other humans?
So I had a eureka moment with AI programming a few weeks ago: I described a basic domain problem to it in clear English. It was revealing not just because of all the time it saved, but because it fundamentally changed how programming worked for me. Instead of writing code and developing my domain at the same time, I was able to focus my mind completely on one single problem. My experiences with AI programming have been much worse since then, but I think it highlights how AI has the potential to remove drudgery from our work: tasks that are easy to automate are, almost by definition, rote. I instead get to focus on the more fun parts. The fulfilling parts.
> By definition, automation is about scaling, and the higher up the chain you go, the fewer people are needed to manage the bots; what do you do with the rest? (Do you believe the people who run the models for profit and benefit the most would volunteer to redistribute their wealth and enact some sort of post-scarcity communist-like equality?)
I think the best precedent here is the start of the 20th century. In that period, elites were absolutely entrenched against ideas like increasing worker pay, granting workers more rights, or raising taxes. However, I believe one of the major turning points in this struggle worldwide was the revolution in Russia. Not because of the communist ideals it espoused, but because of the violence and chaos it caused. People, including economic elites, aren't Marxist-style unthinking bots; they could tell that if they didn't do something about the desperation and poverty they had created, they would be next. So out of a combination of self-interest and, yes, their own moral compasses, they made compromises with the radicals to improve the standard of living for the poor and common workers, who were mostly happy to accept those compromises.
Now, it's MUCH more complicated than I've laid out here. The shift away from the Gilded Age had been happening for nearly twenty years at that point. But I think it illustrates that concentrating economic power without letting it trickle down is dangerous: elites who create constant social destruction with no reward will destroy themselves. And they will be smart enough to realize this.
> AI has the potential to remove drudgery from our work: tasks that are easy to automate are, almost by definition, rote.
I like to think that the best kind of automation, when it comes to writing code, is writing less code: writing it instead with strategic abstractions that embody your best understanding of the subject matter and your architectural vision.
> Maybe it's a negative for you if you already have marketable skills, but a positive for others who want to get in.
I am not fully clear: to get in on what? The skill that is valued less and less? Or on being an LLM prompter? How much would rational management be willing to pay a prompt writer (assuming they cannot automate that as well in the first place)?
yeah but we don't have time to analyse this for years and years while upping our power consumption. In the end we consume too much dirty power and have to change this, and quickly. AI is worth nothing if the world is burning.
> yeah but we don't have time to analyse this for years and years while upping our power consumption
this is 100% true. we also don’t have time to debate the morality and necessity of each specific activity for years. if AI energy use is indeed as small as some comments here suggest, ignoring it to focus on improving things like heating, cooling, and transportation could be a better course of action.
>> The problem is the amount of ceremony, silly OOP abstractions, dependency injection, etc.
Your code snippet certainly has a lot of unnecessary ceremony. Why use a builder object at all? Why use a static class with a function to build the builder object?
There is a lot that gets done behind the scenes in createBuilder(). I understand where you're coming from, but it lets you override any defaults you don't like and provide your own.

I personally still stick to the standard MVC pattern and don't go crazy with abstractions. I place my business logic in services and inject those into my controllers, but if you ran a debugger you would not have to jump through interfaces and other needless abstractions that are a thing of the past (and, if you follow current tutorials and books, the present).

I have used Node.js, and still use it to give my frontend developers an environment built on Express, with Gulp handling minification/transpilation/compression, for use in Umbraco (a .NET Core CMS). My frontend developers don't need to know C# and can work in standard EJS templates and HTML, while still benefiting from SCSS and modern JavaScript. I can then build out the Razor syntax for the views and drop their CSS and JS files directly into the CMS projects.
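For what it's worth, here's a minimal sketch of the kind of thing I mean, assuming a snippet along the lines of ASP.NET Core's minimal hosting model (my illustration, not the snippet from the parent comment):

```csharp
// Program.cs (.NET 6+ minimal hosting). CreateBuilder() sets up configuration,
// logging, dependency injection and the web server behind the scenes;
// you only touch the defaults you want to change.
var builder = WebApplication.CreateBuilder(args);

builder.Logging.ClearProviders();   // drop the default logging providers...
builder.Logging.AddConsole();       // ...and opt back into console logging only

builder.Services.AddControllersWithViews();   // plain MVC, no extra ceremony

var app = builder.Build();
app.MapDefaultControllerRoute();
app.Run();
```

The builder is doing real work, but you never have to see it unless a default doesn't suit you.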
> BeagleV™ is the first affordable RISC-V board designed to run Linux. Based on the RISC-V architecture, BeagleV™ pushes open-source to the next level and gives developers more freedom and power to innovate and design industry-leading solutions with an affordable introductory price at $149.
Where you have a more controlled audience, like an office environment, I find Blazor Server is a good candidate.
There's not much upfront transfer. Latency is a consideration if it's excessively high, but there's also the benefit of being able to use full netcoreapp (instead of netstandard) libraries, plus parallelism.
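As a rough sketch of how little wiring Blazor Server needs (standard .NET 6-style hosting calls; "/_Host" is just the conventional host page name):

```csharp
// Blazor Server setup (.NET 6 style). The UI runs on the server and only
// diffs and events travel over the SignalR connection - hence the small
// upfront transfer and the sensitivity to latency mentioned above.
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddRazorPages();
builder.Services.AddServerSideBlazor();

var app = builder.Build();
app.UseStaticFiles();
app.MapBlazorHub();                 // the SignalR endpoint each circuit connects to
app.MapFallbackToPage("/_Host");    // conventional host page
app.Run();
```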
The A300 is wonderful, but there aren't any Zen 2 APUs available. When the next-gen APUs with Zen 2 cores come out, I'm hoping to build an A300 with a Ryzen 5 APU and 32GB of RAM. Seems like a dream dev machine to me.