Couto's comments | Hacker News

React was closer to Elm back in the days of stateless function components and Redux. Since then it has been drifting further away from Elm.

Redux was kind of a poor man's Elm, but it got the right principles. However, the JS community hates boilerplate, so new, more complicated abstractions appeared. It was also too easy to shoot yourself in the foot with Redux.

Since then, with hooks, things are just getting harder and more complicated in my opinion.


Redux had some strange "features" that increased boilerplate without bringing anything worthwhile: mapDispatchToProps (I see it's gone since hooks; I could never find a sensible reason to use it the way the authors intended), the global reducer, serializable state, and serializable actions.

I once counted how many things you had to do to add a new button with a new action (back when class components were still acceptable): you had to change 4 files in 7 different places. That was completely insane to me. After a little fighting with Redux (so I could pass non-serializable actions) I got it down to 2 changes in 2 files (one to dispatch the action and one to reduce it).

Redux is better now but I do not think its principles were that good.


Hi, I'm a Redux maintainer. A few notes here:

- It was never necessary to write `mapDispatchToProps` as a function. `connect` always supported an "object shorthand", where you passed in an object full of action creators, and we recommended that as the default: https://react-redux.js.org/using-react-redux/connect-mapdisp... (see the first sketch after this list). That said, yes, with hooks we just give you `const dispatch = useDispatch()` and let you use that as necessary.

- The split across multiple files was never _necessary_. Splitting code like `actions/todos.js`, `reducers/todos.js`, and `constants/todos.js` was _common_ (and admittedly shown in the docs), but Redux itself never cared about how you organized your code at the runtime level. The single-file "ducks pattern" ( https://github.com/erikras/ducks-modular-redux ) was proposed very early on and was always something users could do; see the second sketch after this list.

- Labeling "global reducer, serializable state, serializable actions" as "not bringing anything worthwhile" seems like a complete misunderstanding of Redux's design and purpose. Redux was created to give you a consistent data flow architecture, and the ability to make it easier to understand when, why, and how your state gets updated. Centralizing state and having consolidated reducer logic means you _always_ know "go look at the reducers" to see what state the app has, what actions _can_ occur in the app, and how the state gets updated for each action that can occur. Serializable state and actions enable the Redux DevTools, which show you the history of dispatched actions, the action and state contents for each dispatch, and the final resulting state after each dispatch. So, the design constraints are fully intentional to give you the benefits that make the app's behavior easier to understand.
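
To illustrate the first point, here's a rough sketch of both forms (the `addTodo` action creator and `TodoList` component are hypothetical):

    import { connect } from 'react-redux'
    import { addTodo } from './todosDuck'   // hypothetical action creator
    import { TodoList } from './TodoList'   // hypothetical component

    // Function form (never required):
    const mapDispatchToProps = (dispatch) => ({
      addTodo: (text) => dispatch(addTodo(text)),
    })
    const ConnectedTodoList = connect(null, mapDispatchToProps)(TodoList)

    // Object shorthand: connect wraps each action creator in dispatch for you
    const AlsoConnectedTodoList = connect(null, { addTodo })(TodoList)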
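
And a minimal sketch of a single-file "duck", per the pattern linked in the second point (module and action names are made up for the example):

    // todosDuck.js -- action types, action creators, and reducer in one file
    const ADD_TODO = 'myapp/todos/ADD_TODO'

    export const addTodo = (text) => ({ type: ADD_TODO, text })

    export default function todosReducer(state = [], action) {
      switch (action.type) {
        case ADD_TODO:
          return [...state, { text: action.text, done: false }]
        default:
          return state
      }
    }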

Finally, note that Redux usage patterns changed dramatically in 2019, with the release of our official Redux Toolkit package and the React-Redux hooks API. RTK includes methods that simplify all the common Redux use cases: setting up a Redux store with good defaults, writing reducers with simpler immutable update logic (and getting all the action creators generated for free), patterns like async requests / reactive side effects / normalizing items by ID, and even our RTK Query data fetching layer for fully declarative data fetching and caching.
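
For a sense of what that looks like, here's a small sketch using RTK's `createSlice` (slice and field names are made up for the example):

    import { createSlice, configureStore } from '@reduxjs/toolkit'

    const todosSlice = createSlice({
      name: 'todos',
      initialState: [],
      reducers: {
        // "mutating" syntax is safe here: Immer turns it into an immutable update
        todoAdded(state, action) {
          state.push({ text: action.payload, done: false })
        },
      },
    })

    // action creators are generated for free
    export const { todoAdded } = todosSlice.actions

    export const store = configureStore({ reducer: { todos: todosSlice.reducer } })
    store.dispatch(todoAdded('learn RTK'))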

Redux is not the right choice for all apps, and there are a lot of other good alternative tools in the ecosystem. But Redux's design _is_ very intentional, those constraints were put in place to enable the desired benefits, and Redux usage today is much easier thanks to RTK.


The newish `useReducer` hook is basically Redux, but tied into React, like in Elm.
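
A minimal sketch of what I mean (the counter is just an example):

    import { useReducer } from 'react'

    function counterReducer(state, action) {
      switch (action.type) {
        case 'increment': return { count: state.count + 1 }
        default: return state
      }
    }

    function Counter() {
      // an Elm-style update loop, but local to this component
      const [state, dispatch] = useReducer(counterReducer, { count: 0 })
      return (
        <button onClick={() => dispatch({ type: 'increment' })}>
          {state.count}
        </button>
      )
    }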


I'm assuming you mean `useReducer` with `useContext`, but note that they are not a replacement for Redux: https://blog.isquaredsoftware.com/2021/01/context-redux-diff...


That article makes a lot of hand-wavy assertions and makes up strange definitions. useReducer, especially with useContext, absolutely does replace Redux and provides a mechanism for state management. The article is, in short, just wrong.


The main point is here: https://blog.isquaredsoftware.com/2021/01/context-redux-diff...

Context updates all components while Redux and similar state management tools, because they live outside of React, do not. That's the main reason to use true state management tools over context. If you have a relatively small application or you don't care about that, continue using context. The article is also by a Redux core maintainer so you might call that biased but they know what they're talking about, since they're privy to React design decisions much more than regular users.


> Context updates all components.

All the components that consume it. Not the whole tree under the provider.


It is the whole tree, actually. That's one reason I don't use context.


That is simply not true. You only subscribe to a context via a Consumer or the useContext hook.

It is well documented and implemented that way.

https://react.dev/reference/react/useContext

> useContext is a React Hook that lets you read and *subscribe* to context from your component.

Emphasis mine.
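
A tiny sketch of the difference (component names are hypothetical):

    import { createContext, useContext } from 'react'

    const ThemeContext = createContext('light')

    // subscribes: re-renders whenever the ThemeContext value changes
    function ThemedButton() {
      const theme = useContext(ThemeContext)
      return <button className={theme}>OK</button>
    }

    // does not subscribe: a context value change alone won't re-render it
    // (it can still re-render if its parent re-renders for other reasons)
    function Logo() {
      return <img src="/logo.png" alt="logo" />
    }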


Definitely interesting, albeit a bit too green to be a beta version IMHO (e.g. I can't select a contact when adding a relationship, and I added two pets by accident because there was no feedback).

However, it looks really promising and more feature rich than the current version of Monica.

I do have a question though. Will this version support WebDAV/CalDAV? Syncing contacts to my mobile phone is the main reason I use Monica on a daily basis.


I used JSLint extensively back in the day.

It mostly fell into disuse because JSHint appeared, which allowed a lot more customization of the rules being applied.

Later, ESLint popped up with its plugin engine, allowing anyone to have their own custom set of rules.

As far as I know, no one cared about JSHint's license because no JSHint code was being shipped, and the code would eventually be minified anyway, leaving no traces of JSHint.

Now... we can talk about the morality of ignoring a license, but ultimately JSHint's license was simply ignored by almost everyone.


Coverage only indicates which parts of the codebase were touched by the test suite. A big test suite doesn't mean high coverage.

Coverage can also be increased without growing the test suite, by shrinking the codebase (within practical limits, obviously).

Personally, I only find coverage useful as an indicator of which code still needs to be tested, like forgotten edge cases or conditional branches.


I get the feeling that The Update Framework[1] fits in here somewhere, but I can't put my finger on where or how. Anyone willing to describe how the two could work together?

[1]: https://theupdateframework.io/


This reminds me of the server that Zoom used to run. Accepting connections only from 127.0.0.1 isn't enough on its own, since any request from the browser would match that IP, even if the request was being made through an XSS attack.

I'm sure someone with more security knowledge can chime in with a better explanation.


What I do is generate a random token, pass it to the browser I spawn, and only accept requests that include the token.
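
As a rough sketch of that scheme in Node (the header name and port are arbitrary choices for the example):

    const crypto = require('crypto')
    const http = require('http')

    const token = crypto.randomBytes(32).toString('hex')

    http.createServer((req, res) => {
      const sent = req.headers['x-auth-token'] || ''
      // timingSafeEqual requires equal lengths and avoids timing leaks
      const ok = sent.length === token.length &&
        crypto.timingSafeEqual(Buffer.from(sent), Buffer.from(token))
      if (!ok) { res.writeHead(403); return res.end() }
      res.end('hello')
    }).listen(8080, '127.0.0.1')

    // `token` still has to reach the spawned browser somehow -- see the
    // discussion below for ways to do that without leaking it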


What is a good, secure way to pass a random token to the browser? If the token is part of the URL, which is on the command line, then it appears other users can see the token. What I do for DomTerm (https://domterm.org) is create a small, only-user-readable HTML file which sets the secret token and then does a 'location.replace(newkey)', where newkey is an http URL to localhost that includes the secret token. I spawn the browser with a file: URL, so the browser can only read the file if it is running as the same user. Better suggestions?
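
For anyone curious, a sketch of what such a bootstrap file might look like (the path, port, and token are placeholders):

    <!-- ~/.domterm/start.html, mode 600, opened via a file: URL -->
    <script>
      // jump into the real app, carrying the secret in the URL;
      // only the same OS user can read this file to learn the token
      location.replace("http://127.0.0.1:8080/#token=REPLACE_WITH_SECRET");
    </script>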


I use Chrome's DevTools protocol, so I get a pipe to the browser that I can issue commands over. The token is sent to the browser via that pipe; there is no URL on the command line for other users to snoop via 'ps' or the task manager.
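
A rough sketch of the idea using the chrome-remote-interface package (this connects over a debugging port rather than the pipe described above, and `window.__authToken` is a made-up name):

    const CDP = require('chrome-remote-interface')

    async function injectToken(token) {
      // assumes Chrome was spawned with --remote-debugging-port=9222
      const client = await CDP({ port: 9222 })
      const { Runtime } = client
      // stash the token where the page's own scripts can pick it up
      await Runtime.evaluate({
        expression: `window.__authToken = ${JSON.stringify(token)}`,
      })
      await client.close()
    }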


pgAdmin does something like this. It also makes it easy to get another URL with that token, in case you want to open the same page in a different browser.


Wouldn't proper CORS be enough? I guess you would have to avoid putting any sensitive data in GET requests


No, because CORS can only restrict which origins (scheme, domain, port combinations) are able to access the site's data. Here you're not even connecting from a web origin but from localhost, and you're trying to defend against all access except by your own frontends. For this, you need a shared secret between the server and the frontend.

A further limitation of CORS is that certain requests are allowed even if they are not from an allowed origin.

To conclude, you definitely need a secret.


Is there a good, simple way to keep it secret?

I assume someone could look at the JavaScript in the browser and figure out that this must be the secret, because it is passed to the server on every request, and then write their XSS attack to use it.


The secret would have to be non-static (not baked into the code).


I'm fuzzy on the details, but isn't this what client certs are for?


The cert obviously couldn't be static in this case (otherwise it would be trivial to extract the private key).

Creating a cert during install probably adds a good bit of complexity (especially if the app has multiple env targets).


You could accomplish this with client certs too, sure. A random secret is a simpler solution in many ways and accomplishes the same goal, though.


One problem is that we developers are lazy and use Access-Control-Allow-Origin: * (the wildcard) instead of the actual hostname, i.e. allowing all origins to access the backend.

No modern browser allows cross-origin access to localhost without that header.

But it's still possible to forge a request using curl or whatever to bypass CORS, so, as the parent post suggests, use a token of some sort.

I also recommend a strict Content-Security-Policy to stop cross-site injection attacks (e.g. someone adding an image to your page/app with src="/api/cmd=rm -rf /").
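
A small sketch of setting both headers in Node (the origin and policy values are examples, not recommendations for every app):

    const http = require('http')

    http.createServer((req, res) => {
      // CORS: name the one frontend origin that may read responses, never "*"
      res.setHeader('Access-Control-Allow-Origin', 'https://app.example.com')
      res.setHeader('Vary', 'Origin')
      // CSP: only allow scripts, images, etc. from our own origin
      res.setHeader('Content-Security-Policy', "default-src 'self'")
      res.end('ok')
    }).listen(8080)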


That's a good trick. Thanks for the tip.


You are right, but just to give a better picture of Portugal: we still have a lot of illegal drugs coming in from Morocco; it's not a solved problem yet.

When this law was approved (2001), Portugal (Casal Ventoso) was Europe's major entry point for heroin. Everybody knew someone who was addicted, and it was impossible to walk in any city park and not see a used syringe on the ground.

I think what's happening in the US with the opioid crisis is a bit different and requires a different solution: regulation vs. decriminalization.


That's not the same as being decriminalized.

In Portugal, if you do that, the police will ask for your ID if it's a light drug like cannabis, or take you to the police station if it's a heavier drug. If you keep getting caught, a judge can order you into psychiatric treatment.

Being decriminalized means that you won't be arrested when you ask for help, and you have access to clean syringes to avoid diseases and infections.

Help is provided, but that's not the same as the drugs being legal.


It seems from this that Portugal is less decriminalized than SF? Which also has clean syringes but the cops will let you shoot up or smoke in peace.


While what you're saying is true, I think that's why the world is afraid to adopt Portugal's policy. If getting halfway there is worse than doing nothing at all, it's a very dangerous thing to try.


Just my opinion:

In my experience, when people say drugs they think of cannabis, but cannabis was never the major problem in Portugal.

Our problem was heroin. I think no country will legalize heroin, at least not in the short/medium term.

This is clearly working: one of the major entry points for heavy drugs in Europe (Casal Ventoso in Lisbon) is almost free of problems now. There are still heavy drug users, of course, but most of the associated violence has disappeared.

Even Portuguese people's mindset shifted from "junkies" to "people who need help".

So even if you consider it only "halfway there" because you're thinking about legalization, it is still working, and it's a good middle ground to help society shift its mentality.

Disclaimer: I'm talking from my experience as a Portuguese person, not hard numbers. I remember walking in my hometown's parks (a very small city) and seeing syringes and drug users everywhere when I was a kid. That doesn't happen anymore.


Heroin does seem to be a problematic drug.

I read that only 10% of users get addicted. But are there places where heroin is used where it doesn't become a problem?

(I am from the country where the film Trainspotting is set).


> In 1971, as the Vietnam War was heading into its sixteenth year, congressmen Robert Steele from Connecticut and Morgan Murphy from Illinois made a discovery that stunned the American public. While visiting the troops, they had learned that over 15 percent of U.S. soldiers stationed there were heroin addicts. Follow up research revealed that 35 percent of service members in Vietnam had tried heroin and as many as 20 percent were addicted—the problem was even worse than they had initially thought. [...]

> Lee Robins was one of the researchers in charge. In a finding that completely upended the accepted beliefs about addiction, Robins found that when soldiers who had been heroin users returned home, only 5 percent of them became re-addicted within a year, and just 12 percent relapsed within three years. In other words, approximately nine out of ten soldiers who used heroin in Vietnam eliminated their addiction nearly overnight.

From https://jamesclear.com/heroin-habits


True, I had heard that before, but it's not really what I was asking. Your example still has 10% addicts (the same number I mentioned). Did those people become problematic to their wider community? The 10% who become addicted have caused plenty of problems in my city.


Certain countries, like Portugal, Estonia and so on, already issue digital certificates stored inside the chip of every citizen's ID card. I believe these cards are made by Gemalto[1].

[1] https://www.gemalto.com/govt/identity


Out of curiosity, how did you manage to have Mozilla as a location provider?

I hope I'm wrong here, but as far as I can tell, you need either Google services or UnifiedNlp from microG to achieve that.

I say this because I run LineageOS without GApps, and I could never make it work with Mozilla's location provider.

The lack of geolocation is the only thing bothering me to be honest.


Mozilla UnifiedNlp Backend (no GApps needed), installed via F-Droid.

[1] https://f-droid.org/en/packages/org.microg.nlp.backend.ichna...

