Hacker News: purplecones's comments

Awesome. I love that you have a section showing exactly what CSS properties are used to create each letter.


Hover over a letter's class to highlight the rendered stroke.

https://yusugomori.com/projects/css-sans/fonts


This is great material. Bookmarked!


I love this! I use the Terminal and VIM every day and really enjoy tools that fit within that workflow. I didn't even know about this project when I started building a similar tool (with the same name!).


I have a relative path set up for my jrnl: "this": "./notes.txt"

which I've aliased: note=jrnl this

If I'm doing something that I get pulled away from, it's a quick: note "was working on the flux capacitor in somefile". Add the notes file to .gitignore as desired.


Can you share more about how you set up relative paths? Is that a bash alias?


Sorry, I completely missed that bit! The relative path part is set up in your jrnl config, usually ~/.jrnl_config.
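For anyone else setting this up, a minimal sketch of what that might look like (journal names and paths here are illustrative; older jrnl releases keep the config as JSON in ~/.jrnl_config):

```
{
  "journals": {
    "default": "~/journal.txt",
    "this": "./notes.txt"
  }
}
```

with the alias in your shell config: alias note='jrnl this'. Since "./notes.txt" is relative, the notes land in whatever directory you run the alias from, which is what makes the per-project trick work.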


You might want to look at vimwiki if you haven't already. https://github.com/vimwiki/vimwiki


I love these bite-sized tips! Are there more of these somewhere?


Hey, they can't change it due to the S3 issue. See their Twitter post: https://twitter.com/awscloud/status/836656664635846656


Wow, so they can't currently update the S3 status page because of the S3 issues: the process that updates the status page itself runs on S3.

That raises many more questions about how accurately past outages have been accounted for and reported. It also highlights a design question: if you run things in the cloud, what fallback do you have when the cloud goes wrong? The impact of this outage is going to echo for a while, with many questions being asked.


Smells like BS. As pointed out in https://news.ycombinator.com/item?id=13757284, the text should have reflected the real situation. So either the icons are not the only problem, or they're just an excuse.


It seems status pages should be on entirely independent infrastructure, given the criticality of the information they provide. Perhaps even on a separate domain.


Hi this is great! I have some questions.

I'm curious how you decided to model the data in your Neo4j database. How did you do the 'Suggested Readings' section? What does the Cypher query that drives it look like?

How do you like using AlchemyAPI? Is it doing all the NLP stuff for you?


Alchemy is doing all the NLP. Each article is extracted for concepts and entities (as defined by Alchemy in their documentation). I normalize each term that is extracted in order to prevent duplicates (some duplicates still sneak through, so it requires a little bit of data maintenance). The way this looks is that there is one node for a term, say "Machine Learning." In one article "Machine Learning" is a concept with negative sentiment and high relevance; in another article it is an entity with low relevance but positive sentiment. The relationships house the sentiment and relevance properties: (machine_learning)-[relevance,sentiment]-(article).
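A rough sketch of that model in Cypher (the labels and property names here are my guesses, not the actual schema):

```cypher
// One shared Term node; relevance and sentiment live on the relationships.
MERGE (t:Term {name: "machine learning"})
CREATE (a1:Article {title: "Article A"})
CREATE (a2:Article {title: "Article B"})
CREATE (a1)-[:MENTIONS {kind: "concept", relevance: 0.9, sentiment: -0.6}]->(t)
CREATE (a2)-[:MENTIONS {kind: "entity", relevance: 0.2, sentiment: 0.4}]->(t)
```

Keeping the per-article scores on the relationship rather than the term node is what lets the same term carry different sentiment/relevance in different articles.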

The suggested readings section pulls the most relevant concept of that article and finds connected articles with the same concept at high relevance. This way suggested articles are more than just keyword hits; it's all about relevance. I'm still tweaking this query, and there's a lot more that can be done with it, such as matching sentiment and emotion. As the dataset grows I'll look to add a feature that pulls a list of articles based on a cluster of highly associated entities.
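Based on that description, the suggested-readings query might look something like this (again assuming hypothetical labels, an `id` property, and a made-up 0.7 relevance threshold):

```cypher
// Take the source article's most relevant term, then pull other
// articles that mention the same term with high relevance.
MATCH (a:Article {id: $articleId})-[r:MENTIONS]->(t:Term)
WITH a, t, r.relevance AS rel
ORDER BY rel DESC
LIMIT 1
MATCH (other:Article)-[m:MENTIONS]->(t)
WHERE other <> a AND m.relevance >= 0.7
RETURN other
ORDER BY m.relevance DESC
LIMIT 5
```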

As for Alchemy, I've tried a number of different NLP APIs and, in my opinion, none of them have come close to matching Alchemy's accuracy. It does make mistakes but at a low enough level that it's easy to manually correct.


Thanks for the background. I'm working on a similar project but currently parsing news articles using a collection of specific RSS feeds and calling Google's NLP API with the text. It sounds like AlchemyAPI would be a better fit in this case.

How are you finding Neo4J is handling the scale of reading and writing all these stories? I've had a positive experience so far but I'm only in the few thousands range.


I've found Neo4j handles read/write seamlessly, but I'm only around 10,000 nodes and 20,000+ edges. I've heard of use cases for Neo4j in the range of 50M+ nodes. My position is that the question isn't whether Neo4j can handle it but whether your code and infrastructure can.


We were missing the good old `meteor deploy` days so we decided to create `meteor-now`. Should be the easiest way to deploy your Meteor app!

