Thanks for sharing the links; the architectural overview is very insightful.
I'm curious how this approach manages cardinality explosion? Also, how do you handle cases where a user asks for data that requires running multiple queries, specifically where each query depends on the results of the previous one?
> I'm curious how this approach manages cardinality explosion?
The Knowledge Graph explicitly models cardinality and relationships between entities. The compiler uses that to generate SQL that handles it correctly, e.g. by using DISTINCT.
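To make the fan-out issue concrete, here is a rough sketch with a hypothetical two-table schema (not Veezoo's actual generated SQL): a naive aggregate over a 1:N join double-counts a customer-level column, while a DISTINCT-style rewrite does not.

    import sqlite3

    # Hypothetical schema: one customer with two orders. Aggregating a
    # customer-level column over the joined rows double-counts it.
    con = sqlite3.connect(":memory:")
    con.executescript("""
        CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, credit_limit INTEGER);
        CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, amount INTEGER);
        INSERT INTO customers VALUES (1, 'Acme', 1000);
        INSERT INTO orders VALUES (10, 1, 50), (11, 1, 70);
    """)

    # Naive: the 1:N join fans out to two rows, so credit_limit is counted twice.
    print(con.execute(
        "SELECT SUM(c.credit_limit) FROM customers c JOIN orders o ON o.customer_id = c.id"
    ).fetchone())  # (2000,) -- wrong

    # Cardinality-aware: deduplicate customers before aggregating.
    print(con.execute("""
        SELECT SUM(credit_limit) FROM (
            SELECT DISTINCT c.id, c.credit_limit
            FROM customers c JOIN orders o ON o.customer_id = c.id
        )
    """).fetchone())  # (1000,) -- correct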
> Also, how do you handle cases where a user asks for data that requires running multiple queries, specifically where each query depends on the results of the previous one?
Veezoo can generate adaptive plans, so it can decide to wait for a database query to return results before continuing.
Thanks for answering! Regarding cardinality, I was actually thinking more about high-cardinality dimensions on the NLU side, e.g., if a user asks for "Sales for [Obscure Company Name]," and you have 10M distinct customers. Does the Knowledge Graph have to index all those values for the mapping to work?
On the adaptive plans, is that execution logic handled entirely by your deterministic compiler, or does it loop back to the LLM to interpret the intermediate results?
>Does the Knowledge Graph have to index all those values for the mapping to work?
Both options exist. You can index them as entities [1] within Veezoo and keep the mapping automatically synchronized with the database, or decide not to index them, in which case Veezoo will e.g. attempt to answer the question using a string search in SQL.
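As a rough illustration of the two modes (hypothetical table and values, not Veezoo's generated SQL): indexed values resolve to entity IDs up front, while unindexed ones fall back to a substring search at query time.

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
    con.executemany("INSERT INTO customers VALUES (?, ?)",
                    [(1, "Obscure Company GmbH"), (2, "Other Corp")])

    # Indexed as entities: the mention is resolved to an id before any SQL runs.
    entity_index = {"obscure company gmbh": 1}
    print(entity_index.get("obscure company gmbh"))  # 1

    # Not indexed: fall back to a substring search in SQL at query time.
    term = "Obscure Company"
    print(con.execute(
        "SELECT id, name FROM customers WHERE name LIKE '%' || ? || '%'", (term,)
    ).fetchall())  # [(1, 'Obscure Company GmbH')]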
>On the adaptive plans, is that execution logic handled entirely by your deterministic compiler, or does it loop back to the LLM to interpret the intermediate results?
The plan is done entirely by the LLM. The VQL steps (i.e. fetching answers from the database) within the plan are where the compiler kicks in.
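As a rough mental model of that split (hypothetical interfaces, not Veezoo's actual internals): the LLM owns the plan and reacts to intermediate results, while each VQL step goes through the deterministic compiler.

    # Hedged sketch: llm, compiler and db are hypothetical stand-ins.
    def run_adaptive_plan(question, llm, compiler, db):
        context = {"question": question, "results": []}
        while True:
            step = llm.plan_next_step(context)       # the LLM decides what to do next
            if step["kind"] == "answer":
                return step["text"]                  # done: final answer to the user
            if step["kind"] == "vql":
                sql = compiler.compile(step["vql"])  # deterministic VQL -> SQL, no LLM here
                context["results"].append(db.execute(sql))
            # loop: the next planning call sees the intermediate results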
I realize it’s easy to pattern-match this news to 'hiring in India vs. firing in US' given the current climate, but having worked at Amazon India for 4 years, I can tell you the cuts happen there too.
Amazon has a history of annual restructuring that hits every region. It isn't necessarily a direct relocation strategy so much as their standard operational churn. The 'efficiency' cuts are happening globally, India included.
Sure, but at some point in the past, "Amazon India" was not a thing. Nor was "Microsoft India" and so forth. Surely you can understand what it feels like to be an American tech worker in a super high cost of living area, looking at reduction in headcount and continual offshoring of jobs as time goes by. I live in Seattle area, work at one of these big companies, I work with people in India almost every day and have been to India three times on business. When parts of my department's work was allocated to a new team in India, of course I was nervous about that.
I get the fear, but look at it from the investor's perspective. The US market is tapped out, Amazon is already everywhere it can be.
Amazon isn't expanding in India out of love for the country or a desire to see it grow. They are doing it because Wall Street demands infinite growth every single year. Amazon India went from zero to a market leader in a decade not because of charity, but because that is where the new money is.
To keep the valuation climbing (which sustains everyone's RSUs), they have to capture these emerging markets. If they don't, the stock stagnates, and the compensation model for US tech workers falls apart.
They can capture the market without moving the workforce there. Meta/Instagram/WA have dominated the Indian market for a decade now.
It seems like this is pure labor arbitrage. Growth is gone so the only way to increase profits is by cutting costs, with labor force being the top line item.
> They can capture the market without moving the workforce there. Meta/Instagram/WA have dominated the Indian market for a decade now.
The former is a logistics company. They need an on-the-ground workforce in places they operate. The latter are social media products, no local workforce of significance needed.
That said, we are in a world where Amazon is able to do labor arbitrage of software-adjacent jobs by moving them to India. That's been happening for more than 2 decades. Nothing short of new laws levying penalties, or a massive consumer boycott will stop that or slow it down.
You are describing a colonial model: extract all the wealth while investing nothing in the local economy. That era is over.
If anything, Meta is the anomaly, not the role model. They should be required to invest more given their dominance, rather than being praised for extracting maximum value with minimum local footprint. Regulators will likely close that gap eventually.
If a foreign entity came into Florida and bought up 35% of the entire retail infrastructure, you bet the US government would regulate it and demand local value capture.
Case in point: the US actively forced TSMC and Samsung to build $65B+ of factories in Arizona and Texas to secure domestic interests.
And if Chinese/Korean workers were being fired while American workers were being hired by their companies, they would absolutely be correct to see their jobs as being offshored.
>I get the fear, but look at it from the investor's perspective. The US market is tapped out, Amazon is already everywhere it can be.
Heaven forbid we forget about the investors, and don't forget about the executive compensation!
I mean, seriously, is there no such thing as balance? I'm not saying investors should be arbitrarily shorted, but by the same token it doesn't mean workers need to always take the brunt of the change, which is how it goes down 90% of the time.
If layoffs were seen as executive leadership failures first and foremost, it would be a small step in the right direction on accountability.
>To keep the valuation climbing (which sustains everyone's RSUs), they have to capture these emerging markets.
It's a fallacy that the stock must continue to rise to the detriment of the workforce that would supposedly benefit. Never mind that RSUs shouldn't be seen as a primary form of compensation to begin with; there is a myriad of things companies can do to maintain the value of employee RSUs, like bigger grants.
Secondly, you're assuming that to capture these emerging markets, a layoff is a must. In reality, it likely is not. If you have a surplus of resources, deploying them effectively would be a net win, as you re-allocate these folks to higher-priority projects and workstreams. The incentive structure that C-suites have built up since the 1980s, however, doesn't align with that, because executive compensation is entirely based around juicing the numbers on a spreadsheet, as opposed to being rewarded for building sustainable businesses.
>If they don't, the stock stagnates, and the compensation model for US tech workers falls apart.
It doesn't; compensation is broader than RSUs and could be adjusted in kind. This is a solved problem.
True. This is globalism at work. If these companies were not selling goods and services globally, then they wouldn't have to deal with setting up offices and staff around the world, or with pressure from local politicians to hire locals.
Companies hiring more in cheap-labor countries has been obvious for a long time. In Amazon's case, I feel most of the stuff that was cutting edge two decades back is now low-value work where cost is the only edge.
And the original link about investment in India is also about fulfillment jobs and, even worse, “investing in AI”, a.k.a. building data centers, which contribute essentially no jobs at all.
Amazon also employs 1.5 million people globally, 350k of which are in corporate. These 16k were corporate. It still sucks for everyone involved; I know a corporate sales guy who got laid off from Microsoft and it disrupted his life pretty seriously. As the line attributed to Stalin goes, one's a tragedy, a million's a statistic.
Since the HN reaction to layoffs is almost always about blaming H1B, here are a few more things the headline misses:
1. Cuts were global
2. Cuts in US also include H1B employees
3. The 16,000 roles are corporate roles, not just tech-related; the H1B program is not generally utilized for those roles
4. The expansion in India is not just tech. Amazon is a big retailer in India. Understandably, if you're seeing revenue growth in India, you will grow your corporate presence in India. If Walmart becomes a massive retailer in the EU, it will hire EU nationals in the EU. That's not shipping jobs to the EU.
> 1. Cuts were global 2. Cuts in US also include H1B employees
Hell no, Amazon has been a top 10 filer of H1-B LCAs for decades. The only H1-Bs being laid off, if any, are the older ones (over 39), to be replaced with cheaper models.
https://www.youtube.com/watch?v=uS8LNhxJq9Q
That keeps the facilities here, the local employment options here, the growth here, the tax base here...
We should want more smart people moving to this country. More business creation, more capital, more labor, more output.
Immigration is total economic growth for America, non zero-sum. Offshoring is not only economic loss, but second order loss: we lose the capacity over an extended time frame.
I want the loopholes on H1Bs to be closed. H1B is a great concept for bringing in foreign talent that can't be found domestically. But these days it's a shell game that's turned into a way to put shackles on employees who can't job hop. It hurts both groups in the long run.
> want the loopholes on H1Bs to be closed. H1B is a great concept...
There are no loopholes on H1B; it's working exactly as it was intended: to replace, not just supplement, American workers with cheaper, more obedient tech slave workers dependent on their master-employer for their survival.
Several of your links are about PERM applications, not H1B.
I agree abusers (employers) should be put on the H1B visa blacklist which already exists.
H1B already mandates that employees be paid within the wage window of their peers. And anecdotally, I know several who make more than their citizen peers at the same company and level.
Not fully; the problem is much deeper than compensation. Let's not have large companies gaming the allotted slots and using various means to get more of them. And, as a start, put some serious enforcement behind how they justify their H1Bs to begin with.
> Shouldn't we all want H1B rather than offshoring?
That's my opinion.
However, there are issues with who's sucking the tit. If you bring in a bunch of people from outside instead of hiring locals, that's not a win for the locals. On the other hand, what's the difference for someone in San Francisco if Apple hires a guy from India vs. New Jersey? Not much.
And H1B visas can be low-grade indentured servitude.
I am not so sure about that. They raise inflation, home prices, etc. The locals see no real benefit, except having to pay more for everything. While more taxes are collected, most of that goes to offsetting just some of the economic pain induced by the people living there.
And it is in fact zero-sum. Every spot filled in a university or company is a spot not taken by a local. As is obvious from the numbers, more local people are not getting admitted into CS programs, nor are they being hired. It's 100% zero-sum when we look at these numbers and percentages.
If you don't bring more fungible labor into the US, the jobs will be offshored.
Look at what just happened to film labor in 2022-2023. The industry was burgeoning off the heels of the streaming wars and ZIRP. Then the strikes happened.
Amazon and Netflix took trained crews in the Eastern European bloc and leveraged tax deals and existing infra in Ireland and the UK. Film production in LA and Atlanta is now down over 75%, even with insane local tax subsidies (unlimited subsidies in the case of Georgia).
Software development will escape to other cheaper countries. They're talented and hard working. AI will accelerate this.
Then what? America lost manufacturing. I think we've decided that was a very bad idea.
We need to move the cheaper labor here. More workforce means more economic opportunities for startups and innovation. Labor will find a way as long as the infrastructure is here.
De-growth is cost cutting and collapse. Immigration is rapid growth, diversification, innovation, and market dominance.
All those people start buying from businesses here. They start paying taxes here. It supercharges the local economy. Your house might go up in price, but way more money is moving around - more jobs, more growth, second order effects.
America doesn't have the land limits Canada has. And we can set tax policy and regulations to encourage building.
I'd rather be in an America forecasted to hit 500 million citizens - birth or immigration. And I want to spend on their education. I want capital to fund their startup ideas. I want the FTC/DOJ to break up market monopolies to create opportunity for new risk takers and labor capital.
That was the world the Boomers had. Exciting, full of opportunity. That was the world of a rapidly industrializing America.
Right now, the world we have ahead looks bleak. People aren't having kids and we aren't bringing in immigrants. We'll have less consumerism, less labor, and everything will shrink and shrivel and be less than it was.
> If you don't bring more fungible labor into the US, the jobs will be offshored.
Offshoring is not always a substitute for an employee chained to the job by a visa. I'm sure you can get a million and one anecdotes here on HN about the perils of working across timezones, cultures, and legal systems.
If you really think that companies are moving out of country because "there's not enough talent", despite having some of the more relaxed tax codes and most talented universities here: well, sure. That would be hopeless. It also sounds like you're buying snake oil.
They had decades to offshore, and they chose not to. I don't think AI in the near term (<15 years) is going to move that dial much. If they do leave, there's plenty of talent to fill the void.
> If you really think that companies are moving out of country because "there's not enough talent", despite having some of the more relaxed tax codes and most talented universities here
The US has a huge delta between its great universities and its mediocre ones. There are some smart and sharp kids everywhere in even the lowest ranked schools. But altogether the amount of people who can pass a code screen in the US is pretty low. If you ever interviewed people for a software position in a big tech firm, you'd realize this.
>The US has a huge delta between its great universities and its mediocre ones. There are some smart and sharp kids everywhere in even the lowest ranked schools. But altogether the amount of people who can pass a code screen in the US is pretty low. If you ever interviewed people for a software position in a big tech firm, you'd realize this.
I'm convinced that the code screen functions as a somewhat arbitrary filter/badge of honor.
FAANG and equivalents get tens of thousands of applicants, and they cannot hire them all.
If too many pass the code screen, they will just make it harder, even though the job hasn't gotten any more difficult.
Or they get failed at system design. Which is BS in many cases.
It's a necessary filter. Again, you need to interview candidates for these jobs to understand. Our industry doesn't have any qualifications, any exam to pass to certify, so there are just a ton of people who can't do the basic job but think they are qualified because we don't have a good way to screen people for this work.
>The US has a huge delta between its great universities and its mediocre ones.
Like any other country, yes.
>But altogether the amount of people who can pass a code screen in the US is pretty low. If you ever interviewed people for a software position in a big tech firm, you'd realize this.
Compared to India? Or is it fine to lower standards of quality when you are paying an 8th of the cost and it turns out most people don't need to be from MIT to contribute?
That's perfectly fine and dandy. But that's not what H1Bs are for.
H1Bs aren't paid 1/8 of what their counterparts in the same company make.
And no, the same applies to India and to China, but because the number is small here, we pick up the small numbers from the rest of the world as well. We don't only hire people from India and China in tech; they are just more populous countries, so their best workers are far more numerous.
Go to any FAANG in the US and you will see people on H1B from all over Europe, Africa, South America, etc. but Indians and Chinese are the largest group because they are the largest population countries with established pipelines from schools there to schools here to jobs here.
>We don't only hire people from India and China in tech; they are just more populous countries, so their best workers are far more numerous.
So we are talking H1Bs. Does that mean everyone in this small pool of "best foreign talent" also happens to speak English and is able to communicate their ideas on a team?
>the same applies to India and to China, but because the number is small here, we pick up the small numbers from the rest of the world as well.
Well you're already shifting your point:
> But altogether the amount of people who can pass a code screen in the US is pretty low.
You're criticizing America as an excuse to find people overseas and bring them in. Thanks for proving that H1B is being abused.
So you're telling me you're fine taking the time to find the finest H1B workers, but not Americans?
If we have, say, 1M job openings in a field, and only 250k American citizens can pass a screen for that job, then we need to find other people for it, no? Those people will most likely come from the most populous countries in the world...
You could use this exact argument to say nobody should ever have children-- children also raise inflation, home prices, etc. And the majority of your property taxes go specifically towards programs which would be unneeded if nobody had any children.
The fact that naive anti-immigration arguments can be copy-pasted unchanged into arguments against having children is a sign that maybe those arguments are stupid. To understand why, you might start with the fact that immigrants also purchase goods and services, and hence pay the salaries of the ~70% of people in this country employed in some way or another by consumer spending.
Children are future taxpayers, the majority of them with parents who were not a tax burden (i.e., a net positive tax contribution). People without children benefit from the taxes paid by the children of people who rear children; people without children aren't "cashing out" their own contributed retirement taxes, because those contributions went to other retirees.
And citizens benefit from the taxes paid by non-citizen immigrants, whether documented or undocumented. Not just income and payroll taxes that might be dodged by under-the-table arrangements, but sales taxes, property taxes (perhaps paid indirectly via rent to a taxpaying landlord), the consumer share (nearly 100%) of tariffs, etc. And much of that tax base is spent on benefits and services that are not accessible to taxpaying non-citizens.
So from that standpoint, immigrants are a /better/ economic deal for the public than children are. At the end of the day, though, it shouldn't matter where people were born if they're contributing to society, and the grandparent post is 100% correct that the whole debate is stupid.
Oh, in that case no W-2 employee pays income taxes; their employer does. I guess we're all just mooches on society and only the company owners do anything.
No, they just pay sales tax and other taxes on use. I was being sarcastic because you are fundamentally incorrect and as the other comment said, engaging in sophistry.
Grade school math. Look at income tax receipts: the top 5% pay >61% of all income taxes.
You can try and split hairs with "sales taxes" and "payroll taxes" and try to shimmy things into some anti-capitalist stance ("but the companies benefit from their labor!!!", "renters pay property taxes indirectly!"), but the overwhelming majority of all tax payments come from a small percentage of individuals.
Why does this matter? The government spends X dollars each fiscal year, divided by the number (N) of people. Most people aren't paying X/N.
The government would not be able to fund every social program or service if it weren't for these receipts, which most people cannot afford to pay. Even 100% of the majority of salaries can't cover this amount.
> Why does this matter? The government spends X dollars each fiscal year, divided by the number (N) of people. Most people aren't paying X/N.
It matters because we don't know if these people are being taxed more proportionately or less. Like, Elon Musk pays more tax than you or I, but he probably pays at a much lower rate.
What you don't want (from an equity and fairness perspective) is for people with more money to pay a lower rate of tax. That will cause problems.
From a total population perspective, given some amount of money S it doesn't really matter who pays it (except for downstream impacts around fairness and elections).
However, your original point was:
> The vast majority of adults and their children will never pay their tax burden proportionately.
I would argue that this is incorrect; everyone pays some proportion of their income in income/sales/property/estate taxes. And really, your point about who pays the majority of US federal taxes doesn't actually support your claim.
Finally, I would note that I mostly replied because I really hate those top x% comparisons as they're deceptive without looking at the proportion of income earned.
> Government could not afford to provide the services they provide if these taxes weren't paid, full stop.
Of course they could. Taxation is not necessary in the short term for a government to provide services (especially if we're talking about the US which both issues its own currency and benefits from massive foreign demand for its debt).
Over the long term, taxation needs to at least pay back the debt but that long-term appears to be much longer than I would have expected (when was the last time the US government ran a surplus?).
Immigrants pay social security taxes, unemployment taxes, ... that they will also never be able to benefit from. Those are purely for the benefit of US citizens.
There is a good case for vetted legal immigration (there is need and they fill that unmet need), no question; however, that should not be at the expense of the local population, regardless of country. In other words, the locals should not suffer a depressed job market because of immigration. The whole reason for a state to exist is to first and foremost look after the wellbeing of its citizens that elect the bodies of government.
I'm not sure where you're getting that from in my comment. I never said US citizens should want H1Bs for everyone with zero vetting, only that they are a net tax positive.
It's not a dichotomy of maintaining the status quo or getting rid of H1b completely. At least in big tech companies, they do follow labor market tests and prevailing wage tests and so on that are designed to vet that there is an unmet need and that visa holders aren't underpaid. I won't deny there are visa mills and consultancies that game the system and pretty much explicitly just hire cheap foreign labor, but this is a thread about H1B in the context of Amazon layoffs, not InfoSys layoffs.
It depends if the immigrant is hired because the native worker is deemed too expensive. In this case, it contributes to reducing contributions through wage suppression.
If you have access to data that shows big tech is preferentially hiring visa holders over US citizens you should get on that class action lawsuit right away. That's probably hundreds of thousands or even millions per person in lost wages, and even after lawyers take their 30% cut, that's still a sizable chunk.
It's anecdata, but a college friend who now works as a manager in an IT/data consultancy in my birth country in the EU told me bluntly that they prioritized hiring foreigners, as they were 20% cheaper.
Given that the company sponsors them and they come from lower-income countries, they are ready to accept lower wages. If they do it, I don't see why everyone else wouldn't be doing the same.
It's of course hard to prove formally, as those companies will comply with regs to make it look like they aren't discriminating (fake job ads, etc.). By the way, in the US, Indian consultancies got busted for this.
Based on the "Worst Case Housing Needs: 2025 Report to Congress" released in late 2025, the U.S. Department of Housing and Urban Development (HUD) found that foreign-born population growth accounted for approximately two-thirds of the increase in nationwide rental demand between 2021 and 2024.
Of course. In any growing services-based economy you will have foreign born population growth. If you eliminate that population growth, economic growth will decline with it.
If we were a growing manufacturing-based economy that wouldn't be the case as much.
Yep. The negativity around H-1Bs is centered around using them for low/mid-level roles in the pursuit of wage suppression, racial/caste discrimination with hiring managers abusing the system to get their friends in, and the tech industry unnecessarily hogging them when we really need them in niche industries (e.g. nuclear engineering).
Trump made the cost change some months ago to address those concerns but I haven’t seen any studies showing whether or not those changes had a positive effect or not.
Because they can hire 5 programmers in India for the cost of 1 in America, and American programmers aren't 5x better than Indian ones? Amazon is an online shop, not a jobs program. I'm sure they would rather eliminate a position altogether even more than send it to India.
We should want open borders. Immigration is a significant net positive. But we can settle for controlled immigration with liberal limits.
H1-B is stupid on its face. You're seriously telling me that this software engineering job absolutely cannot be filled by an American? That doesn't pass the laugh test.
> H1-B is stupid on its face. You're seriously telling me that this software engineering job absolutely cannot be filled by an American? That doesn't pass the laugh test.
The job description is a senior full stack product developer fluent in all programming languages and frameworks. Salary is $70,000/year. Somehow they can never find Americans to fill those jobs. They'll go on Linkedin complaining that Americans are too lazy and don't have the right hustle culture and talk about made up concepts like work life balance when the bosses demand 100 hour work weeks without overtime pay.
That seems low. Is it a corporate strategy to set a low salary and when nobody local fills it (because it's below the competitive rate) they get to hire H1-B?
No, because H1B has pay requirements. As someone who went through the process with Amazon I can confirm that they definitely do offer you a salary that is in line with the local market. There might be lower incentive for raises down the line, but that's a conspiracy theory at best
By law. H1b requires the wages to be greater than the prevailing wage for similar positions in the region. They are published by DoL: https://flag.dol.gov/wage-data/wage-search
For this kind of experience, you'd be looking for level 2 _minimum_ and likely level 3. For King County in WA it's right now $149,240 and $180,710 respectively. Level 4 wage is $212,202, btw.
The H1B requirements are even higher, but also WA state law requires software developer salaries to be 3.5 x minimum wage x 52 weeks per year. Currently, that is $124k+, because minimum wage is $17.13 per hour.
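For reference, assuming a standard 40-hour week, that floor works out to 3.5 × $17.13/hr × 40 hr/wk × 52 wk ≈ $124,700 per year, which is where the $124k+ figure comes from.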
Our competitors in another country will have no problem building those products.
Then they'll be sold in America to American consumers.
Then our industry deflates, because we can't compete on cost or labor scale / innovation.
If we put up tariffs, we get a short respite. But now our goods don't sell as well overseas in the face of competition. Our industries still shrink. Eventually they become domestically uncompetitive.
So then what? You preserved some wages for 20 years at the cost of killing the future.
I think all of these conversations are especially pertinent because AI will provide activation energy to accelerate this migration. Now is not the time to encourage offshoring.
Immigration isn't "shipping the job to India". It's bringing the labor here and contributing to our economy. This might have a suppressive force on wages, but it lifts the overall economy and creates more opportunity and demand.
Offshoring is permanent loss. It causes whatever jobs and industry are still here to atrophy and die. The overall economy weakens. Your outlook in retirement will be bleaker.
If you have to pick between the two, it's obvious which one to pick.
And that's the general problem. People don't care about the overall economy when wages are going down and cost of living is going up. Even myself, I couldn't care less about the overall health of the economy. I care about being able to sustain my and my family's lifestyle, put food on the table, someday own a home, and not live paycheck to paycheck because all the jobs are paying below a living wage, etc.
I'm extremely fortunate to make the salary that I do, but I know plenty of others not so fortunate, in other fields that don't pay nearly as well as tech does, and probably never will. The answer can't be "go into tech" nor should it be "let's suppress wages so labor isn't so expensive for our domestic companies." And obviously offshoring isn't great either.
We can still import talent without suppressing wages, by not abusing the program and actually only importing for roles that truly, beyond all reasonable doubt, could not be filled by a domestic worker.
Usually the next step of this failed discourse is to explain that locals are so entitled that they don't want to do hard jobs for the minimum wage, due to decades of wage suppression done thanks to immigration.
In France, being a cook used to pay very well, now that most cooks in Paris are from India or Sri Lanka, often without a proper visa or at the minimum wage, no local wants to do this anymore (working conditions are awful).
The industry then whines loudly about "the lack of qualified (cheap) workers"
Turns out this is a difficult problem with no one good solution. Subjecting labor to a race to the bottom is probably the most efficient individual strategy from a capitalist standpoint, but it destroys itself all the same once your customers can no longer afford to buy most of the products made. The selfish strategy ruins the entire system if everybody does it.
Capitalism and Communism have opposite problems. Communism attempts to manage the markets from a top down approach, making it relatively easy to handle systemic problems but almost impossible to optimize for efficiency because there is far too much information that doesn't make it to the top. Capitalism by contrast pushes the decisions down to where the information is, allowing for excellent efficiency but leaving it blind to systemic problems.
So the best solution is some kind of meet in the middle approach that is complex and ugly and fosters continual arguments over where lines should be drawn.
Innovation is why American salaries in tech are so high. It's what funded the trillion-dollar companies.
If that becomes so much of a commodity that some other countries can do it for pennies on the dollar, then yes, salaries will deflate. But we sure aren't offshoring (nor using most H1Bs) to see more innovation. Quite the opposite.
Tech isn't manufacturing, where the biggest supply line wins by default. That's why I'm not holding my breath about the US being outcompeted on talent anytime soon. If anything, its own greed will consume it.
You say "we should want open borders" then argue for something that is objectively not open borders. "Open borders" and "controlled immigration" are diametrically opposed things, regardless of whatever liberal limits you're imagining. Almost nobody is arguing for zero immigration.
An LLM is optimized for its training data, not for newly built formats or abstractions. I don’t understand why we keep building so-called "LLM-optimized" X or Y. It’s the same story we’ve seen before with TOON.
Yeah fwiw I agree. I was impressed at how well the agents were able to understand and write their invented language, but fundamentally they're only able to do that because they've been trained on "similar" code in many other languages.
It’s the standard enshittification lifecycle: subsidize usage to get adoption, then lock down the API to force users into a controlled environment where you can squeeze them.
Like Reddit, they realized they can't show ads (or control the user journey) if everyone is using a third-party client. The $200 subscription isn't a pricing tier. It's a customer acquisition cost for their proprietary platform. Third-party clients defeat that purpose.
This analysis dismisses MCP by focusing too narrowly on local file system interactions. The real value isn't just running scripts; it's interoperability.
MCP allows any client (Claude, Cursor, IDEs) to dynamically discover and interact with any resource (Postgres, Slack) without custom glue code. Comparing it to local scripts is like calling USB a fad because parallel ports worked for printers. The power is standardization: write once, support every AI client.
Edit:
To address the security concerns below: MCP is just the wire protocol like TCP or HTTP. We don't expect TCP to natively handle RBAC or prevent data exfil. That is the job of the application/server implementation.
> To address the security concerns below: MCP is just the wire protocol like TCP or HTTP. We don't expect TCP to natively handle RBAC or prevent data exfil. That is the job of the application/server implementation.
That is simply incorrect. It is not a wire protocol. Please do not mix terminology. MCP communicates via JSON-RPC, which is the wire protocol. And the TCP you're describing as a wire protocol isn't a wire protocol at all! TCP is a transport protocol. It isn't only philosophy; you need some technical knowledge too.
Fair point on the strict terminology, I was using 'wire protocol' broadly to mean the communication standard vs. the implementation.
A more precise analogy is likely LSP (Language Server Protocol). MCP is to AI agents what LSP is to IDEs. LSP defines how an editor talks to a language server (go to definition, hover, etc.), but it doesn't handle file permissions or user auth; that's the job of the OS or the editor.
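To make the "it's just JSON-RPC" point concrete, here's a rough sketch of a tool discovery and a tool call as the messages look on the wire (field names are from my reading of the spec and may not match the current revision exactly):

    import json

    # JSON-RPC 2.0 request a client sends to discover an MCP server's tools...
    list_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

    # ...and roughly the shape of the server's response.
    list_response = {
        "jsonrpc": "2.0", "id": 1,
        "result": {"tools": [{
            "name": "query_db",
            "description": "Run a read-only SQL query",
            "inputSchema": {"type": "object", "properties": {"sql": {"type": "string"}}},
        }]},
    }

    # Invoking a tool is just another request/response pair.
    call_request = {
        "jsonrpc": "2.0", "id": 2, "method": "tools/call",
        "params": {"name": "query_db", "arguments": {"sql": "SELECT 1"}},
    }
    call_response = {
        "jsonrpc": "2.0", "id": 2,
        "result": {"content": [{"type": "text", "text": "1"}]},
    }

    print(json.dumps(call_request, indent=2))

Everything above the transport (auth, RBAC, exfil controls) lives in whatever server handles these messages, which is the point being made.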
Would you say MCP is a protocol (or standard) similar to how REST is a protocol in that they both define how two parties communicate with each other? Or, in other words, REST is a protocol for web APIs and MCP is a protocol for AI capabilities?
also REST is less about communicating, more about the high level user interface and the underlying implementations to arrive at that (although one could argue that’s a form of communicating).
the style does detail a series of constraints. but it’s not really a formal standard, which can get pretty low level.
—
standards often include things like MUST, SHOULD, CAN points to indicate what is optional; or they can be listed as a table of entries as in ASCII
> standard (noun): An acknowledged measure of comparison for quantitative or qualitative value; a criterion
note that a synonym is ideal — fully implementing a standard is not necessary. the OAuth standard isn’t usually fully covered by most OAuth providers, as an example.
—
> The Model Context Protocol (MCP) is an open standard and open-source framework
MCP, the technology/framework, is like Django REST framework. it's an implementation of what the authors think is a good way to build RESTful web APIs.
MCP, the standard, is closer to REST, but it’s more like someone sat down with a pen and paper and wrote a standards document for REST.
They aren't the same, but they have some similarities in their goals, albeit focussed on separate domains, i.e. designing an interface for interoperability and navigation/usage… which is probably what you were really asking (but using the word protocol waaaaaaay too many times).
Thanks, and call me wrong, but I think "Protocol" in MCP is somewhat misused. Sure, it is somehow a protocol, because it commits to something, but not in the technical sense. MCI (Model Context Interface) would probably be the better name?
I agree that interface would be a better name than protocol, but Model Context Integration/Integrator would be even better, as that is its core intent: to integrate context into the model. Alternatively, Universal Model Context Interface (or Integrator) would be an even better name imo, as that actually explains what it intends to do/be used for, whereas MCP is rather ambiguous/nebulous/inaccurate on the face of it, as previously established further up-thread.
That said, I think as the above user points out, part of the friction with the name is that MCP is two parts, a framework and a standard. So with that in mind, I'd assert that it should be redefined as Model Context Interface Standard, and Model Context Interface Framework (or Integration or whatever other word the community best feels suits it in place of Protocol).
Ultimately though, I think that ship has sailed thanks to momentum and mindshare, unless such a "rebranding" would coincide with a 2.0 update to MCP (or whatever we're calling it) or some such functional change in that vein to coincide with it. Rebranding it for "clarity's sake" when the industry is already quite familiar with what it is likely wouldn't gain much traction.
Wow, this is great. Calling it UMCI would have saved me a lot of confusion in the first place. But yeah I think the ship has sailed and it shows that a lot of things there were cobbled together in a hurry maybe.
> MCP allows any client (Claude, Cursor, IDEs) to dynamically discover and interact with any resource (Postgres, Slack) without custom glue code.
I don't think MCP is what actually enables that, it's LLMs that enable that. We already had the "HTTP API" movement, and it still didn't allow "without custom glue code", because someone still had to write the glue.
And even with MCP, something still has to glue things together, and currently it's the LLMs that do so. MCP probably makes this a bit easier, but OpenAPI or something else could just as easily have done that. The hard and shitty part is still being done by an LLM, and we don't need MCP for this.
The thing is, current models are good enough that you can mostly achieve the same by just putting a markdown file[1] on your server that describes their API, and tell people to point their agent at that.
For complex interactions it might be marginally more efficient to use an MCP server, but current SOTA models are good at cobbling together tools, and unless you're prepared to spend a lot of time testing how the models actually end up interacting with your MCP tools you might find it better to "just" describe your API to avoid a mismatch between what you expose and what the model thinks it needs.
[1] Slightly different, but fun: For code.claude.com, you can add ".md" to most paths and get back the docs as a Markdown file; Claude Code is aware of this, and uses it to get docs about itself. E.g. https://code.claude.com/docs/en/overview.md
Adding MCP servers isn't free: they take space in your context, and if you are working at anything bigger than a startup, none of the companies allow their workers to connect to other companies' MCPs, and they can easily make their MCP a data exfil machine.
> MCP allows any client (Claude, Cursor, IDEs) to dynamically discover and interact with any resource (Postgres, Slack) without custom glue code.
My agent writes its own glue code, so the benefit does not seem to really exist in practice. Definitely not for coding agents, and increasingly less for non-coding agents too. Give it a file system and bash in a sandbox and you have a capable system. Give it some skills and it will write itself whatever is needed to connect to an API.
Every time I think I have a use case for MCP I discover that when I ask the agent to just write its own skill it works better, particularly because the agent can fix it up itself.
The skill/CLI argument misses what MCP enables for interactive workflows. Sure, Claude can shell out to psql. But MCP lets you build approval gates, audit logs, and multi-step transactions that pause for human input.
Claude Code's --permission-prompt-tool flag is a good example. You point it at an MCP server, and every permission request goes through that server instead of a local prompt. The server can do whatever: post to Slack, require 2FA, log to an audit trail. Instead of "allow all DB writes" or "deny all," the agent requests approval for each mutation with context about what it's trying to do.
MCP is overkill for "read a file" but valuable when you need the agent to ask permission, report progress, or hand off to another system mid-task.
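For the curious, a minimal sketch of such an approval gate, assuming the official Python SDK's FastMCP helper and the allow/deny payload shape Claude Code documents for --permission-prompt-tool (both from memory, so treat the names and fields as assumptions):

    import json
    from mcp.server.fastmcp import FastMCP  # official MCP Python SDK (assumed API)

    mcp = FastMCP("approval-gate")

    @mcp.tool()
    def approve_tool_use(tool_name: str, input: dict) -> str:
        """Called for each permission request when configured via
        --permission-prompt-tool; returns a JSON-encoded allow/deny decision."""
        payload = json.dumps(input).lower()
        # Example policy: block anything that looks like a destructive SQL statement.
        if any(word in payload for word in ("drop ", "delete ", "truncate ")):
            return json.dumps({"behavior": "deny",
                               "message": "Mutations need human sign-off"})
        return json.dumps({"behavior": "allow", "updatedInput": input})

    if __name__ == "__main__":
        mcp.run()  # stdio transport; register it in Claude Code as an MCP server

The same hook could just as easily post to Slack or write an audit log before returning its decision, which is the "approval gates" use case above.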
You end up wasting tokens on implementation, debugging, execution, and parsing when you could just use the tool (tool description gets used instead).
Also, once you give it this general access, it opens up essentially infinite directions for the model to go to. Repeatability and testing become very difficult in that situation. One time it may write a bash script to solve the problem. The next, it may want to use python, pip install a few libraries to solve that same problem. Yes, both are valid, but if you desire a particular flow, you need to create a prompt for it that you'll hope it'll comply with. It's about shifting certain decisions away from the model so that it can have more room for the stuff you need it to do while ensuring that performance is somewhat consistent.
For now, managing the context window still matters, even if you don't care about efficient token usage. So burning 5-10% on re-writing the same API calls makes the model dumber.
> You end up wasting tokens on implementation, debugging, execution, and parsing when you could just use the tool (tool description gets used instead).
The tokens are not wasted, because I rewind to before it started building the tool. That it can build and manipulate its own tools is to me the benefit, not the downside. The internal work to manipulate the tools does not waste any context because it's a side adventure that does not affect my context.
Maybe I'm not understanding the scenario well. I'm imagining an autonomous agent as a sort of baseline. Are you saying the agent says "I need to write a tool", it takes a snapshot, and once it's done, it rewinds to the snapshot but this time, it has the tool it desired? That's actually a really cool idea to do autonomously!
If you mean manually, that's still interesting, but that kind of feels like the same thing to me. The idea is - don't let the agent burn context writing tools, it should just use them. Isn't that exactly what yours is doing? Instead of rewinding to a snapshot, I have a separate code base for it. As tools get more complex, it seems nice to have them well-tested with standardized input and output. Generating tools on the fly, rewinding, and using tools is just the same thing. You even would need to provide some context that says what the tool is and how to use it, which is basically what the mcp server is doing.
> Are you saying the agent says "I need to write a tool", it takes a snapshot, and once it's done, it rewinds to the snapshot but this time, it has the tool it desired? That's actually a really cool idea to do autonomously!
I'm basically saying this, except I currently don't give the agent a tool to do it automatically, because it's not really RL'ed to that extent. So I use the branching and compaction functionality of my harness manually when it should do that.
> If you mean manually, that's still interesting, but that kind of feels like the same thing to me.
It's similar, but it retains the context and feels very natural. There are many ways to skin the cat :)
Yeah, it might be useful for some people to stop thinking about MCP in relation to agentic harnesses. Think more about environments you don't control, such as Claude Web or ChatGPT. MCP is just a standard (fallible like most standards) but has gained traction and likely to stick around. Extremely useful for non technical people if all their apps/agents are communicating with each other (mcp).
Useful for service providers who want to expose themselves to technical consumers without having to write custom SDKs that consume their RESTful/GraphQL endpoints.
The best implementation of MCP is when you won't even hear about it.
I definitely agree that it is currently pretty shit and unnecessary for agentic coding; CLIs or some other solutions will come along. (The premise is the same though: searchable/discoverable and executable tools in your agentic harness are likely going to be a very good thing, instead of having to document in claude.md which OS- and CLI-specific commands it should run, even though that seems far more powerful and sensible at this point in time.)
Doesn't that require a complete lack of concern on the part of the postgres side? I feel like I'm missing something in terms of why anyone would even ever allow that.
In the same way giving an LLM shell access requires a complete lack of concern.
You can give an LLM a shell into a container sandbox with basically nothing in it, or root shell on a live production server, or anything in between. Same goes for how much database access you want to give an LLM with your MCP shims.
You can ask the LLM for an ad hoc report. It can look at the schema, run the queries and give you the results. Of course you can just give it read access.
I'm curious about the economics of canceling a deal like this. Since the editor spent significant time reviewing the drafts, did the contract require you to reimburse the publisher for those costs or return the advance?
It is not a protected term, so anything is state-of-the-art if you want it to be.
For example, Gemma models at the moment of release were performing worse than their competition, but still, it was "state-of-the-art". It does not mean it's a bad product at all (Gemma is actually good), but the claims are very loose.
Juicero was state-of-the-art on release too, though hands were better, etc.
> It's just marketing. [...] It is not a protected term, so anything is state-of-the-art if you want it to be.
But is it true?
I think we ought to stop indulging and rationalizing self-serving bullshit with the "it's just marketing" bit, as if that somehow makes bullshit okay. It's not okay. Normalizing bullshit is culturally destructive and reinforces the existing indifference to truth.
Part of the motivation people have seems to be a cowardly morbid fear of conflict or the acknowledgment that the world is a mess. But I'm not even suggesting conflict. I'm suggesting demoting the dignity of bullshitters in one's own estimation of them. A bullshitter should appear trashy to us, because bullshitting is trashy.
The scale is there. I'm scraping, cleaning, and making token-efficient dozens of sources every single hour. The lack of money for embedding everything was a temporary problem.
In the direction of "empowering the public with new capabilities they didn't have before", Scry offers, with the copy-paste of a prompt and a conversation with an agent:
1) Full readonly-SQL + vector manipulation in a live public database. Most vector DB products expose a much narrower search API; basically only a few enterprise-level services let you run arbitrary SQL on remote machines. Google BigQuery gives users SQL power, but it mostly doesn't have embeddings, doesn't connect public corpora, doesn't have indexes as good, and doesn't support an agentic research experience. Beyond object-level research, Scry is a good tool for exploring and acquiring intuitions about embedding-space.
2) An agent-native text-to-SQL + lexical + semantic deep research workflow (a rough sketch of the lexical + semantic pattern follows this list). We have a prompt that's been heavily optimized for taking full advantage of our machine and Claude Code for exploration and answering nuanced questions. Claude fires off many exploratory queries and builds towards really big queries that lean on the SQL query planner. You can interrupt at any time. You have the compute limits to do lots of exhaustive exploration; often, more epistemically powerful than finding a document is being confident that one doesn't exist.
3) Dozens of public commons in one database, with embeddings.
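As promised above, a rough sketch of the lexical + semantic pattern (toy data and made-up embeddings, nothing Scry-specific): filter candidates lexically, then rank the survivors by vector similarity.

    import numpy as np

    # Toy corpus: (text, embedding) pairs; real embeddings would come from a model.
    docs = {
        "doc1": ("grid reliability report 2023", np.array([0.9, 0.1, 0.0])),
        "doc2": ("school lunch budget memo", np.array([0.1, 0.8, 0.2])),
        "doc3": ("reliability of rural water systems", np.array([0.7, 0.2, 0.3])),
    }
    query_text = "reliability"
    query_vec = np.array([0.85, 0.15, 0.05])  # embedding of the query (stand-in)

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Lexical filter first, then semantic ranking on whatever survives.
    hits = sorted(
        ((name, cosine(vec, query_vec))
         for name, (text, vec) in docs.items() if query_text in text),
        key=lambda h: -h[1],
    )
    print(hits)  # doc1 ranks above doc3; doc2 is filtered out lexically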
My rule for modern TVs:
1. Never connect the TV panel itself to the internet. Keep it air-gapped. Treat it solely as a dumb monitor.
2. Use an Apple TV for the "smart" features.
3. Avoid Fire TV, Chromecast, or Roku.
The logic is simple: Google (Chromecast) and Amazon (Fire TV) operate on the same business model as the TV manufacturers, i.e. subsidized hardware in exchange for user data and ad inventory. Apple is the only mainstream option where the hardware cost covers the experience, rather than your viewing habits subsidizing the device.
That's exactly my own thought process. I don't pretend that Apple is saintly, but their profit model is currently to make money through premium prices on premium products. They have a lot to lose, like several trillion dollars, in betraying that trust.
A large % of their revenue comes from app store/services and they have incentives to lock you into the ecosystem, sell you digital shit and take a cut off of everything.
I saw an ad for Apple's gaming service in my iPhone system settings recently!
That's not to say that Google isn't worse but let's not pretend Apple is some saint here or that their incentives are perfectly aligned with the users. Hardware growth has peaked, they will be forced to milk you on services to keep growing revenue.
Personally I'm looking forward to the Steam Deck; if SteamOS gets annoying, it's a PC built for Linux, so there's going to be something else available.
True. The best option currently is to buy an Nvidia Shield TV, unlock the bootloader and install a custom Android ROM. The hardware is great, and if you install a custom ROM, you have more freedom than Apple TV will ever give you.
The comment about the ad wasn't about the ad itself. It was an Apple ad for an Apple service, so they didn't make any money at all on the ad. The remark was about the service Apple was pushing, and just how intrusively it was pushed.
But the comment OP was replying to was about their ad services and what incentive the company has to operate in good faith or risk impacting sales to the majority of their business.
Correct, and they didn't sell your data to do it. I'm okay with that. If I trust Apple with basically my life stored on their phone and in their cloud, and processing payments for me, and filtering my email, and spoofing my MAC address on networks (and, and, and), it seems foolish to be worried about them knowing what TV shows I like to watch at night too. At least to me. It's gonna be a sad day when Tim leaves and user privacy isn't a company focus anymore.
Services are 25% of revenue and are the only part that is growing/can grow; that means all focus is going to be on expanding that revenue, i.e. enshittification.
Hardware is now purely a way to get you onto the App Store, which is why iOS is so locked down and the iPad has a MacBook-level processor with a toy OS.
If you stop looking at the marketing speak and look at it from a stock owner perspective all the user hostile moves Apple is double speaking into security and UX actually make a lot more sense.
Hardware is still 3x the revenue of services, and though it has a lower margin, it is the bulk of the company's profit. Apple was 3% of the PC market in 2010 and is 10% today, while Android is 75% of the global cellphone market; there's plenty of room for growth in hardware... if you stop looking at the marketing speak, whatever that means.
I don't see how this really changes the underlying problem of the device spying on you and then them selling that information to the highest bidder. I'm not reaching for a financial report to fix that.
Apple doesn't sell information, they sell access to eyeballs. Quite a big difference. The whole point of the first OP's comment was that ad revenues to Apple are not worth hurting the other parts of their business built around privacy. Pointing out that Apple shows ads for its own services within its own OS isn't evidence otherwise.
Apple absolutely does allow wholesale data harvesting by turning a blind eye to apps that straight up embed spyware SDKs.
This isn’t some hypothetical or abstract scenario, it’s a real life multi billion dollar a year industry that Apple allows on their devices.
You can argue that this is not the same thing as the native ad platform that they run and I’d agree but it’s also a distinction without a meaningful difference.
All you've done is move the goal posts, and it's not even ads related. I'm not entirely certain what you're arguing, other than having some feelings about Apple.
Like another comment mentioned, I'm ready to go back to torrenting. I'm currently paying for 4 streaming service subscriptions (if you count YouTube Premium), where I have super segmented and annoying search UX, and Apple won't even let me pay for their service in my EU country (Croatia). And the DRM story is ridiculous. I'll just set up an ARR stack and have a better experience than I can pay for, for free.
Jellyfin + Arr stack would take a couple of hours to setup and cost $10/month for a seedbox in Europe, but it's not as convenient as downloading an app and logging in.
If it was just one app or even two I would agree, but there's:
- Netflix
- HBO max
- Sky Showtime
- Amazon Prime
- Apple TV+
- Disney+
This is just the stuff I watched this year.
Add in all the region locks, plus not all the services having rights to local dubs despite them being available (more of an issue for children's stuff but still relevant; Disney+ is unusable for me because of this).
Netflix used to have a catalog worth keeping the subscription for; nowadays I maybe get to watch something once a quarter and keep it on for kids' stuff.
Streaming is not convenient anymore; it's a shitshow.
I think a jellyfin/ARR/Seedbox setup is going to be the solution this year.
> I don't pretend that Apple is saintly, but their profit model is currently to make money through premium prices on premium products
Is this statement based on anything other than Apple marketing materials, perhaps a meaningful qualification from an independent third party? I worry this falsehood is being repeated so much it has become "truth".
For some reason, some people have this inexplicable rose-tinted vision of Apple. Until they release the source code of their products, the only rational stance is to treat their software as malware.
If further evidence is necessary, any Apple device that I have owned pings multiple Apple domains several times per minute, despite disabling every cloud dependency that can be disabled. The roles of the domains are partially documented, but traffic is encrypted and it is impossible to know for sure what information Apple is exfiltrating. It is certainly a lot more than a periodic software update check. It certainly seems that Apple is documenting how people interact with the devices they own very closely. That's an insane amount of oversight over people's lives considering that some (most?) people use their phones as their primary computer.
I just opened Activity Monitor - a process called "dasd" is the 5th largest consumer of CPU time. What does it do? Apple does not want you to know. Apple also will not let you disable it. Apple will not even tell you if this process is legitimate (it is signed by "Software Signing" lmao).
$ man dasd
No manual entry for dasd
There are like two dozen processes like this, half of which open network connections despite me never invoking any Apple services or even built-in apps. macOS has basically become malware.
Even excusing that daemon, here is a list of processes which have attempted to contact Apple in the past 24 hours, according to Little Snitch. I am certain this is not even a complete list, because macOS is closed source and likely can bypass application firewalls altogether:
Again, I have never used iCloud/Apple services, turned off all available telemetry options and did not open any Apple applications while all this took place (I only use Firefox and iTerm). Almost all of these processes lack a man page, or if they have one, it's one-line nonsense which explains nothing. This is beyond unprofessional.
A scheduling daemon shouldn't be the 5th largest consumer of CPU. The question is what it is scheduling. Collecting data about user behavior would be a background task, you know...
Absence of evidence isn't evidence of absence, but it certainly rhymes. Is there proof that Apple is monetizing our data with third parties? It's very clear how almost every other major company is, but Apple's been reasonably respectful about it.
Google is also vehemently opposed to selling your data to third parties. That's how they keep themselves as the middleman between advertisers and users. What they do is allow detailed behavioral targeting. Apple prefers to expose contextual targeting data to advertising instead. Apple is also better about not letting advertisers run random scripts.
But frankly the difference between the two companies seems more a matter of degree than kind. It's not like Apple has a strong, principled stance against collecting data. They have a strong principled stance against other ad networks collecting user data, which looks a lot like anticompetitiveness. Their first party software collects identifiable data on you regardless of whether you opt out. They just avoid using that to target you if you opt out.
The reason Apple says their advertising doesn't track you is because they define "tracking" as purchasing third party data, not first party data collection.
What falsehood? That Apple's profit mix is much less advertising-heavy than its competitors' is just a fact about their incentives at the moment. He didn't really claim much beyond that being better than the alternative of being mostly an advertising company.
My only asterisk to this would be Google Chromecast / Google TV devices, if you already have them.
They have an option (buried deep in settings) to make the home screen apps-only.
> Turn on Apps only mode
> From the Google TV home screen, select Settings, then Accounts & Sign In.
> Select your profile, then Apps only mode, then Turn on.
It also makes the device significantly more performant.
With a bit of fiddling, Android TV can be as good as Apple TV in terms of privacy. Not out of the box, of course, but ADB can remove advertising/surveillance related APK files from most devices sold in big box stores and there are open-source, alternative clients to YouTube and a few other platforms available due to the popularity on the underlying AOSP platform. The same is possible to varying extents on smart TVs that use Android TV as their OS.
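As a rough sketch of what the ADB debloating looks like (the last package name below is a hypothetical placeholder - list what's installed on your own device and look up each vendor package before removing anything):

$ adb connect 192.168.1.50:5555                            # your TV's IP, with developer options and network debugging enabled
$ adb shell pm list packages | grep -iE 'ads|analytics'    # find candidates
$ adb shell pm uninstall --user 0 com.example.vendor.ads   # hypothetical package; removes it for the main user, no root needed

Packages removed with --user 0 generally come back after a factory reset, so keep a note of what you stripped.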
You can even completely replace Google's sponsored-content-feed launcher/homescreen with an open source alternative that is just a grid of big tiles for your installed apps (FLauncher).
For me, SmartTube with both ad blocking and SponsorBlock is the killer feature of Android TV as a platform.
If you're into local network media streaming, Jellyfin's Android TV app is also great. Their Apple TV app is limited enough that people recommend using a paid third-party client instead. And that's inevitably the case with Apple's walled gardens... The annual developer fee means things that people would build for the community on AOSP/Android are locked behind purchases or subscriptions on iOS and Apple TV.
It never occurred to me that that's why all the macOS utilities cost money (I mean, not literally all, but way more basic stuff than you'd ever think to pay for on Windows or Android). I did figure Apple encouraged it because of their massive cut of the revenue, but I forgot they charge devs to publish in the first place.
macOS isn't as locked down as iOS or Apple TV (yet) unless you publish via the Mac App Store, but a secondary factor is that Apple customers expect to pay to solve a problem without having to think about it.
The good is that the above norm encourages the creation of high quality software. The bad is that, by the same token, some ideas that would be free/libre community projects on other platforms are instead paid utilities in Apple's walled garden, especially on iOS and Apple TV.
>It never occurred to me that that's why all the macOS utilities cost money
All macOS utilities absolutely don't cost money. There are countless free macOS utilities in the Mac App Store, as well as open source utilities for macOS specifically too.
I like the idea, but these Kodi-based devices are far too limited; they essentially only serve as media players for local content. For example, streaming YouTube is difficult and a poor experience relative to using VacuumTube on desktop Linux. It's even harder to get a browser working to stream from websites like Pluto and Flixer, especially if you want an adblocker. I haven't found a better option than an upscaled Linux DE on a mini-PC so far (however, see KDE Plasma Bigscreen).
Also, you can buy a more capable used ThinkCentre micro for less money, so the value proposition isn't exactly great.
I wouldn't expect Kodi/OSMC to provide an unofficial YT client. However, the "app" availability issue is a big one for devices like this if they are to compete with spyware-ridden Android TV boxes on the one hand and Linux HTPCs on the other. The Android TV boxes are cheap and support all streaming platforms. The Linux HTPCs are free (as in freedom), typically far more powerful (they can double as consoles/emulators) and don't restrict the user in any way.
Use a PC for "smart" features. Used PC hardware is cheap and plenty effective. And the Logitech K400 is better than any TV remote.
No spying (unless you run Windows). Easy ad blocking. No reliance on platform-specific app support. Native support for multiple simultaneous content feeds (windows) - even from different services.
And it's not like it's complicated. My parents are as tech-illiterate as they come and they've been happily using an HTPC setup for well over a decade. Anyone who can operate a "Smart TV" can certainly use a web browser.
Of course that's a viable option, but it likely uses far more electricity in a year, and unless you're sailing the high seas, you're unlikely to get full 4K HDR from the streaming services on a PC.
Unlikely; an Apple TV is itself a "PC", not much different.
An actual PC doesn't cost much in electricity per year either (say $30/year headless, for several hours of watching a day and sleep mode the rest). Make it an ARM box and it will be quite a bit less.
I have the same setup and have never looked back. My kids can control the TV now via the browser instead of asking me to fiddle with a smartphone, and I can easily block e.g. YouTube via the hosts file. The ability to have multiple streaming services open in different tabs and reading online reviews all on the same screen is also vastly superior to any UX offered by e.g. Chromecast or similar devices.
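For reference, the hosts-file block is only a few lines; these front-door domains are usually enough to stop casual use, though YouTube serves content from plenty of others:

  # /etc/hosts
  0.0.0.0 youtube.com
  0.0.0.0 www.youtube.com
  0.0.0.0 m.youtube.com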
100%. Confirmed by my Firewalla. These and HomePods only access apple.com and icloud.com domains unless you're using apps. No mysterious hard coded IP addresses. Apple TV also has the best hardware, by far.
I used to work in the industry. I know the guys responsible for real-time data capture from various platforms like Roku and Vizio.
I 100% agree, and I own very nice LG TVs. They are not connected to the internet. Each has an Apple TV, and that is the only way they get video, so they can't send data out.
I agree with you except for the Apple TV part. I use a mini-PC running Ubuntu and use a wireless keyboard with integrated touchpad to control it, and it works wonderfully and has a much better user experience than the Chromecast I was using before - a product which has progressively become more and more shitty over the years to the point of being unusable.
An Apple TV is probably also OK, but likely much more expensive. Also, Apple is a company that has always done all it could to lock down its platforms, lock in its users and extract exorbitant fees from developers releasing on its platforms.
I used to use a NUC with a K400 as well (and a Logitech Harmony (RIP)), and the Apple TV is a way better experience.
The Apple TV remote is way more useable, and HDMI CEC just works™, which never ever was true with the NUC. I really like the client-server model - the Apple TV is my dumb front end for Plex, Steam Link, and so on. It also is well supported by every streaming service.
All of the Apple TV apps are designed with a UI for a TV and remote, not for a user sitting two feet from a computer with a keyboard and mouse, and they are way easier to use from a sofa than a keyboard + browser combo.
I could fiddle with the NUC and make it work, but it was not family friendly. In general, the "it just works" factor is extremely high, which I could not say for the NUC.
If Apple ever goes evil, I'll just switch to whatever the best solution is when that happens (maybe a rooted Android TV device?). It's not like I'm marrying it. An Apple TV is $150. I've gotten 4 years out of my current one. The cost is negligible.
As I've gotten older, I've really come to value the "it just works" factor. I don't have time or energy for fiddly stuff anymore. After I put in the time to set something up, I want it to be rock solid. To each their own though.
> and use a wireless keyboard with integrated touchpad to control it.
Which wireless keyboard do you use? I've got pretty much the exact same setup - TV + Linux Mint + Logitech K400+. I'm just looking to see if there are better options than the K400+.
Can't wait for Valve to release the new controller with touchpads. It should be more compact than a keyboard, and paired with some voice recognition it would make a keyboard almost obsolete for the smart TV use case.
I believe HDMI has supported sharing an internet connection (HDMI Ethernet Channel) since 1.4, and I wouldn't be surprised to see TV makers attempting to leverage this in the future to get around you not connecting your TV directly to the internet.
I'm not any more in the ecosystem than an Apple ID and airpods, and it is just fine. The directional spatial audio with the airpods is cool, but we also use other BT headphones with it. I use the ATV almost exclusively for Jellyfin/Infuse.
If these things include WiFi hw it's not so simple.
You'd likely be surprised what proprietary WiFi-enabled consumer products do without your knowledge. Especially in a dense residential environment, there's nothing preventing a neighbor's WiFi AP from giving internet access to everything it deems eligible within range. It may be a purely behind-the-scenes facility on an otherwise ostensibly secured AP.
I don't have firsthand knowledge of TVs doing this, but other consumer devices with WiFi most definitely do this. If you don't control the software driving the TV, and the TV has WiFi hardware, I would assume it's at the very least in the cards.
It's rationalized by the vendors as a service to the customer. The mobile app needs to be able to configure the device via the cloud, so increasing the ability for said device to reach cloud by whatever means necessary is a customer benefit.
It most certainly is. It's not wifi, but it's definitely a thing. It lives down in the 900MHz world where things tend to be slower, but also travel further.
And of course: If it exists, it can be used.
That said, I haven't seen any evidence that suggests that televisions and streaming boxes are using it.
I’d kinda forgotten about it until someone mentioned open WiFi, and this seems like a use case tailor made for it. If not already, it looks like a near certainty to me.
Available as a one-time extra-cost feature on the first Kindle back in '07, Whispernet provided a bit of slow Internet access over cellular networks -- without additional payments or contracts or computers.
And really, Whispernet was great in that role.
But the world of data is shaped a lot differently these days. Data is a lot more-available and much less-expensive than it was back then, ~18 years ago -- and codecs have improved by leaps-and-bounds in terms of data efficiency.
Radios are also less expensive and more-capable compared to what they were in '07.
This will be sold as a feature: "Now with Amazon Whispernet, your new Amazon Fire TV will let you stream as much ad-supported TV as you want! For free! No home Internet connection or bulky antenna required! Say no to monthly bills and wanky-janky setups, and say yes to Amazon Fire TV!"
The future will be advertising. (Always has been, but always will be, too.)
Amazon Sidewalk is more about things connecting to the neighbor's always-plugged-in Echo Dot speaker than it is about them connecting to people walking down literal sidewalks.
As a thought exercise ask yourself would you notice if any of your closed WiFi-enabled hw scanned for APs and occasionally phoned home, if it didn't go out of its way to inform you of this? What would prevent the vendor from doing so?
Why would people even buy something like a smart TV if they know it's highly likely that it's created to spy on them? It's not a necessity, so maybe just don't get a smart TV in the first place? Otherwise, how sure are you that it won't search for an open Wi-Fi network or that it doesn't have a cellular connection?
Because intentionally non-smart TVs are an increasingly niche, and thus expensive, market, and they aren't a categorical upgrade over simply not connecting a smart TV to the internet while benefiting from the manufacturer subsidy from advertisers.
Even if dumb TVs were manufactured at a cost comparable to smart TVs (at the same volume, they'd be cheaper to manufacture!), smart TVs are subsidized by the expected behavioral tracking & ad sale revenue.
Right, but the cars here now have to have some kind of GPS tracker built in. And the jeans are 1% elastane, so they fall to bits in the sun after 6 months. I remember a pair of real denim jeans I picked up in the States that lasted me 10 years.
Quality has gone out of everything in the last 15+ years.
So these items, along with anything marked Smart == Ad platform, or AI == Future Ad platform, are on my 'will not buy on principle' list regardless of need or wants.
Because the stereo doesn't spy on us (hopefully). If it did, I wouldn't buy one, as it's not a necessity, either.
The zipper also doesn't spy on us... yet? When smart zippers become the norm and you can't find jeans with dumb zippers, I'll return to using buttons even if they're a bit annoying to deal with.
Good luck finding a modern car that doesn't have a stereo. And continuing the analogy, good luck finding jeans without a zipper. When the only affordable and available options spy on you, it's simple enough to keep them air gapped from the internet... Electing not to own these devices at all is a much tougher sell.
> Apple is the only mainstream option where the hardware cost covers the experience, rather than your viewing habits subsidizing the device.
This might be temporarily a good rule of thumb to follow, but you will get monetized eventually. Nobody likes leaving money on the table. Same reason why subscription services now serve ads as well.
They generously offer you a free SIM card when going through passport control in Dubai. I can’t think of any other reason to do that, besides pure benevolence.
I read an article a few years ago about someone using a SIM card embedded in a product like this for free internet. The connection was severely limited though.
There already are on Sony TVs. My roommate is always connecting it when I’m away and I have to factory reset it and go through the dark pattern to use it without WiFi.
Prompt for a login or an update check on every start, or once a week, and it wouldn't be difficult to drive up the count of "online" devices.
What would be the monthly cost per unit to LG for servicing those cell modems? Data-only, and I presume they could get some kind of bulk discount as a big manufacturer.
The alternative is that they'll develop some common local mesh network that grabs data through any available gateway. Imagine your TV connecting to some wireless headphones that have multipoint enabled and are also connected to a smartphone with WiFi: the TV sends encrypted data to the headphones, the headphones to the phone, and the phone to some external endpoint. Of course it could be more sophisticated, but it's totally doable and plausible.
Or imagine some localized auto-mesh based on Zigbee/Matter: you have a Philips Hue lamp connected to WiFi, the TV connects to it and it forwards the data... I totally believe this will be the next development of ad networks, sold as "better smart home devices". And it won't require any LTE. Or only a subset of devices could carry LTE while the others use them as a gateway.
Probably a couple of dollars a month, which would be very tough to actually make work. Even Facebook only makes a few hundred dollars a year per person in the US.
Amazon had a data deal for Kindles for a long time. If we're assuming nefariousness, the embedded SIM would only be used for analytics/telemetry not for content, so it shouldn't be too much data.
If Nielsen will give me $1 to keep a journal of what I watch, they might give Samsung something for actual logs.
If you get an Apple TV, also get the Infuse app. It can play anything that is on your home network - SMB, Plex, Jellyfin.
I also recommend running iSponsorBlockTV if you use the YouTube app; it auto-mutes and auto-skips ads.
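To be clear, it doesn't run on the Apple TV itself - it runs on any always-on box on your network (a Pi, a NAS, a mini PC) and remote-controls the YouTube app from there. A minimal sketch, assuming Docker and that I'm remembering the image name correctly (check the project's README for the real image, config and TV-pairing steps):

$ mkdir -p ~/isponsorblocktv
$ docker run -d --name isponsorblocktv \
    -v ~/isponsorblocktv:/app/data \
    ghcr.io/dmunozv04/isponsorblocktv    # image name and data path from memory - verify against the README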
I looked at the git page for iSponsorBlockTV, but it's super confusing. It's talking about installing Python and Docker. You can't do that stuff on an Apple TV, right??
For me: I want something that will always work with minimal effort and is easy to use for the family.
I've farted around with every HTPC software from MythTV on and I'm over it. I'll happily pay the premium for an AppleTV that will handle almost everything in hardware.
I would honestly just use an Apple TV. But the killer feature for me (I currently use a Steam Deck/Steam Controller) is just YouTube without ads, reliably. Also total control: if YouTube jacked up the prices for YouTube Red, I always have uBlock.
Total control is the name of the game for me. I can load Steam. I can load Brave. I can load VLC. I can watch any streaming, play any game (proton supported), or listen to any music.
It's just really grating to buy a nice screen and then have all the streaming services basically lock you to early-2000s picture quality. It's not that it doesn't work at all, but if I get the big nice modern screen I want to be able to use what I paid for.
Not user friendly, and it required dedicated hardware (TV tuners). Governing bodies also couldn't agree on HTPC standards, like PlaysForSure, causing even more confusion. Plex and Sonarr/Radarr are gaining some steam though.
They're great, but my friends get confused when they're staying over and I'm not there. Not having a normal remote throws people. Getting a remote to work perfectly and usefully in Linux isn't all that simple. Plus it's not at all easy to manage external inputs - a smart TV can just switch to the PS5 with a button; how would I do that from my Linux HTPC keyboard?
Don't get me wrong, I'm never giving up my uBlock-YouTube plus Steam plus Plex Linux HTPC, but there are plenty of reasons they're not super practical.
Also doesn't Netflix still throttle to 720p on PCs?
Pretty often, honestly. My friends and I all let each other crash at our places when we're in each other's towns; somebody is visiting my town probably 3-4 times a year, and then my brother and sister come out 1-2 times a year each. So in a busy year that's almost once a month.
So enough that I'd like to find a good solution, even if it's not super high priority. My SofaBaton Bluetooth remote was hopefully going to be the savior, but its Bluetooth mode is pretty bad and makes macros unreliable.
100% agree and do the same. There's no way I'd let one of those things touch the network. That is insane for a techie and even scarier that normal people live that way.
This, except throw out the spyware that is an Apple TV and get an Intel N150-based mini PC (AOOSTAR makes a nice one), throw Bazzite on it, tell KDE to auto-login and auto-load Jellyfin, and attach a Flirc IR receiver with a Flirc remote for it. If you want to get fancy, set a systemd timer to reboot it in the middle of the night.
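The nightly reboot is two tiny units plus an enable - a minimal sketch, with the unit names and the 4 AM time being my own choices:

  # /etc/systemd/system/nightly-reboot.service
  [Unit]
  Description=Reboot the HTPC overnight

  [Service]
  Type=oneshot
  ExecStart=/usr/bin/systemctl reboot

  # /etc/systemd/system/nightly-reboot.timer
  [Unit]
  Description=Schedule the nightly HTPC reboot

  [Timer]
  OnCalendar=*-*-* 04:00:00

  [Install]
  WantedBy=timers.target

$ sudo systemctl enable --now nightly-reboot.timer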
> Apple is the only mainstream option where the hardware cost covers the experience, rather than your viewing habits subsidizing the device.
Years ago our refrain was "if you're not paying for the service, you're the product".
Nowadays we all recognize how naive that was; why would these psychopathic megacorporations overlook the possibility of both charging us and selling our privacy to the highest bidder?
In other words, Apple doesn't have a pass here. They're profiting from your data too, in addition to charging you the usual Apple tax. Why wouldn't they? Apple's a psychopathic megacorporation just like all the rest of them, whose only goal is to generate profit at any cost.