TomFrost's comments

I love the Remote SSH extension for VSCode (save for the frustrating workarounds needed for ssh-agent forwarding -- an absolutely essential feature) and expected VSO to be much more streamlined. But I find myself hitting early walls:

- No documentation on how to clone a private github repo, gitlab repo, etc.

- Cloning a private repo on the command line forfeits the ability to bootstrap your VSO instance from the in-repo config, which kills a huge benefit of this product

- No documentation on forwarding ssh-agent or injecting RSA keys of any kind

There are some other needs addressed in other comment threads (particularly registering a remote headless box as a VSO machine) but the above are instant showstoppers. Perhaps this works with private Azure DevOps repos because of the login integration? I'd be willing to wager that the majority of folks interested in this are on other repo hosts, though.


Semantic recently adopted my team's React adaptation as their official React port. It's lighter weight, eliminates jQuery, and all components are standard React components that can be extended or dropped in as-is.

https://react.semantic-ui.com/
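
To show what "standard React components" means in practice, here's a minimal, hypothetical sketch (Modal and Button come from the library; the surrounding component is made up):

    import React from 'react';
    import { Button, Modal } from 'semantic-ui-react';

    // A plain React component composing Semantic UI React components --
    // no jQuery, just props and children.
    const ConfirmDelete = ({ open, onCancel, onConfirm }) => (
      <Modal open={open} size="tiny">
        <Modal.Header>Delete this item?</Modal.Header>
        <Modal.Actions>
          <Button onClick={onCancel}>Cancel</Button>
          <Button negative onClick={onConfirm}>Delete</Button>
        </Modal.Actions>
      </Modal>
    );

    export default ConfirmDelete;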


Thanks for your work on porting Semantic UI to React. I've used it in an internal application and it was a breeze to work with.


I've been using your react version with a personal project for a while and it's been really helpful, so thank you for your work.


When I click that link I get a blank page with a spinner saying "loading docs" for several seconds before anything actually loads... and it's just a bunch of static text content.

Why does it take orders of magnitude longer than it should to load? It makes me very hesitant to use it in any production application of my own.


I might have to try this out, too bad I didn't see this yesterday.

Settled on trying http://ant.design/ which has actually been pretty nice as well.

Seems most React component libs are material design and I can't stand the look.


Ant looks really good as well!

It has a nicely designed DatePicker [1]. Semantic UI doesn't currently have an official implementation of a Date Picker (although there are unofficial versions on GitHub).

[1] https://ant.design/components/date-picker/


I've been using react semantic ui for the last week and am thinking of switching to ant. Or going a la carte for everything.

Ant is good at making the CSS importable module by module using webpack. Semantic UI could adopt this approach too.
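
For what it's worth, Ant's per-module loading is driven by babel-plugin-import; the setup is roughly a .babelrc entry like this (a sketch -- exact option names may vary by version), which rewrites `import { DatePicker } from 'antd'` into per-component JS and CSS requires:

    {
      "plugins": [
        ["import", { "libraryName": "antd", "style": "css" }]
      ]
    }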

Have you had mobile problems with Ant? Seems a touch buggy there


I've been looking for a replacement for Bootstrap since it's been alpha for ages and it doesn't look like there is any interest in an official React implementation. So this might be it, I'm going to give it a try :)


Have you used React-Bootstrap at all? I've used it in the past and it's not bad, and it's relatively well maintained.

https://github.com/react-bootstrap

However, there is the bigger problem that bootstrap feels more and more like a sinking ship.


Yes I did, thanks. I've also checked out https://reactstrap.github.io . They're both building on top of the v4 alpha, which feels like a sinking ship indeed. Though it could all change of course; they are also updating the official themes to v4, so maybe we'll see an official release this year after all. But then again, it might be too late already...


Why does Bootstrap 4 feel like a sinking ship?


Because v4 has been in alpha since August 2015, and v3 hasn't had significant updates since 2015 either.


Over the years I've gotten an incredible amount of value out of Bootstrap, and 3 has been great, but the uncertainty over 4 and the huge delay have made me start seriously looking at alternatives over the last few months.

Semantic and Foundation are both on my list of things to look at.


The React port is top notch. I'm switching my app to it.


And well documented, thank you for that! Using it right now.

Though it is for a private project; I'm not sure I will use it on a professional job due to the size of the CSS. I spend a lot of time importing things and I'm never sure how much weight and runtime complexity is being added.


Thank you for the hard work done here! We make extensive use of your library, and love how well maintained and easy to use it is.


Can it be used with a project that was initiated with `create-react-app`?


Yep! I actually am using it in the sample project for my "Practical Redux" tutorial series: http://blog.isquaredsoftware.com/series/practical-redux/ . Also using it at work.
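
For anyone else trying it, a rough sketch of the wiring in a fresh create-react-app project (package names per the Semantic UI React docs; the component itself is just an example):

    // after: npm install --save semantic-ui-react semantic-ui-css
    // src/index.js
    import React from 'react';
    import ReactDOM from 'react-dom';
    import 'semantic-ui-css/semantic.min.css'; // the prebuilt theme CSS
    import { Container, Header } from 'semantic-ui-react';

    const App = () => (
      <Container>
        <Header as="h1">Hello from create-react-app + Semantic UI React</Header>
      </Container>
    );

    ReactDOM.render(<App />, document.getElementById('root'));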


ECS is free in that you pay only for the EC2 nodes running your containers -- there's no need to host ECS or do scheduling on your own hardware to use it. It's also Availability Zone-aware right out of the box, making sure the distribution of container instances is optimized for durability. Finally, it's fully managed. No one needs to maintain or upgrade your ECS implementation.

Granted, there's a lot of advantage to building on top of an infrastructure that can be installed on any hardware from any provider. However, we're not talking about rewriting your applications if you need to move away from ECS; it's all still the same containers. Going from ECS to Mesos or Kubernetes when needed is a matter of writing new config files.
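
To make the "fully managed, same containers" point concrete, here's a rough sketch of describing a service to ECS with the Node aws-sdk (names, sizes, and region are made up; error handling omitted). The container image itself is untouched; only this description is ECS-specific:

    const AWS = require('aws-sdk');
    const ecs = new AWS.ECS({ region: 'us-east-1' });

    ecs.registerTaskDefinition({
      family: 'my-api',
      containerDefinitions: [{
        name: 'my-api',
        image: 'my-registry/my-api:1.2.3', // unchanged image; would run as-is on Kubernetes/Mesos
        memory: 256,
        essential: true,
        portMappings: [{ containerPort: 3000 }],
      }],
    }).promise()
      // ECS handles scheduling and AZ-aware placement of the desired count
      .then(() => ecs.createService({
        cluster: 'default',
        serviceName: 'my-api',
        taskDefinition: 'my-api',
        desiredCount: 2,
      }).promise())
      .then(() => console.log('service created'));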

It's a very attractive proposition for small teams on AWS who are trying to spend minimal time on ops.


For one, hitting the Github API to put failed build markers against commits and PRs. CodeBuild doesn't appear to have the same Github integration that most other CIs do out of the box.
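
For context, that integration is just a call to GitHub's commit status API; a hypothetical sketch of doing it from a build step yourself (org/repo names and env vars are made up):

    const fetch = require('node-fetch');

    // Mark a commit as failed so the red X shows up on the PR
    const reportFailure = (owner, repo, sha) =>
      fetch(`https://api.github.com/repos/${owner}/${repo}/statuses/${sha}`, {
        method: 'POST',
        headers: {
          Authorization: `token ${process.env.GITHUB_TOKEN}`,
          'Content-Type': 'application/json',
        },
        body: JSON.stringify({
          state: 'failure',
          context: 'ci/codebuild',
          description: 'Build failed',
          target_url: process.env.BUILD_URL,
        }),
      });

    reportFailure('my-org', 'my-repo', process.env.COMMIT_SHA);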


The problem I've always had with audio interfaces is that input is not private. Requests on public transportation are heard by many. Requests at home turn into a conversation with the roommate, spouse, or children. Requests walking down the street make others question the mental wellbeing of the person talking to him/herself.

I'm reminded of the Ender's Game sequels in which the protagonist wears a small earpiece with an AI named Jane. He communicates with Jane by "subvocalizing" -- mentally saying the words, physically barely uttering a sound. The AI understands.

A few years ago there was a TED talk (forgive me; unable to find the link) in which a technology was demoed to do something similar. Sensors placed around the throat, combined with EEG sensors around the temples, allowed a man to transmit text to a computer by going through all the mental and muscle processes of speaking, stopping short of moving his lips in an obvious fashion or making sounds. The sensors allowed the computer to translate their input into actual words.

Perfecting and miniaturizing that technology, then combining it with an in-ear AI, would be a game changer.


Yeah, you can always find a use case where something won't work and then spend your time discussing why a solution won't work for you.

However, I imagine that a lot of people would love the solution in many scenarios. Personally, I have no problem asking my phone a question when I'm walking down the street or driving. Basically, it would be nice to have the imperfect solution now and take it from there.


Agreed regarding the issues with audio interfaces.

I'd personally love to experiment with a device with a Google Glass-type display, eye tracking to function as a virtual cursor, and a compact ring-like device to enable interactions with that cursor (such as tapping, dragging gestures, etc).


The future is brain-to-text, where our thoughts get piped directly to Alexa/Siri/Cortana/Now

http://www.kurzweilai.net/brain-to-text-system-converts-spee...

No need to utter words in public.


Maybe. I support brain-computer interface research, but I don't think you should be so certain that it's the future.

Imagine you could stimulate the brain directly: what would that look like? If it's visual input, it's going to look a lot like stimulating the visual cortex, right? So you're suggesting we will invent some sort of biomedical device that can project an image onto some region of cortex. That would be neat!

But isn't there already a device that can project an image onto a region of cortex? There is! It's called the eye! What is really gained by stimulating the visual cortex directly, vs just having, say, an eyebrow piercing that just does the same thing by coming in through the eyeball?

Well, ok, maybe you'd like to get a little more bandwidth than the eye can provide? But I'm not sure that really makes sense. The eye takes in a LOT of bandwidth, and most of the time, unless we are fighting a bear or trying to run a football through a bunch of bodies, we're not even using all of the bandwidth.

Maybe you want to be able to look at the world and get input subconsciously without messing up your beautiful view. But you can just dedicate some small number of "pixels" of your visual field to non-visual data. If your brain can't integrate that data with the visual field, it will just become a blind spot, and it will be as invisible to you as your current blind spots are.

Really, our perceptual system could be much more powerful, but beyond a certain point there's no reason to have more bandwidth because the existing stuff is all a brain of our size can really synthesize in realtime. You could add additional inputs to your brain, but your brain will just grow around them, dulling your connection with other parts of your body. That's not necessarily bad, but, again, why not just use those parts of your body for the input?

The planet Earth has spent about a billion years trying to design the perfect input/output interface for the human brain and it's done a pretty good job. I'm not saying we can't improve on it, I'm just saying it's going to be hard to do any optimizations at the hardware level. And we don't really need new hardware to plug in to computers, because our brain is already happy to remap itself to virtual inputs! Pretty much all of the things people imagine doing with a BCI can be done more efficiently, and less obtrusively with augmented reality.


As a deaf dude who has spent his whole career working on software I see it as a software problem.

Despite all the advances, we still don't know how to encode sign language.


If you send the input through the eye it would still appear in your regular field of view, right?

But what if you want to keep looking at whatever you're currently looking at without obstruction?

If you project the input directly onto the brain you could have two separate images, similar to how you can create a picture of, say, a bear in your head while still looking at your computer screen.


> Imagine you could stimulate the brain directly: what would that look like?

It needn't 'look' like any of our senses. Direct brain-computer interaction could take the form of an internal dialogue, similar to how a person might mentally rehearse both sides of an anticipated conversation in their head.


OK! So you're talking about sound data then. We also have a really great input device for that.

How would the direct neural input be better than hearing?


It is not sound data. It is words without sound. There is no tone, no pitch, no volume. It is just words. Thoughts. Concepts. Ideas.

Hearing has so many drawbacks. It is limited in range. It is affected by environmental noise. It is linear and if you miss a word you need to have it repeated somehow, which may not always be possible. Thoughts have none of these limitations.


Infinite volume with no hearing damage?

I've never been able to find a satisfactory answer on how the cochlear implant behaves like that. I somewhat suspect nobody with hearing who developed it actually knows.



If we have the interface why stop at brain to text? Why not make the computer a part of our brains?


> If we have the interface why stop at brain to text? Why not make the computer a part of our brains?

This is how the Borg came about.

Actually I read about some guy who has a chip implanted which allows him to use his otherwise unusable hand. Right now it requires a bank of computers to function in the lab.

Paralyzed man with breakthrough brain chip plays guitar video game, swipes credit card

http://wexnermedical.osu.edu/blog/new-tech-helps-paralyzed-m...


Two major reasons: upgrades and security.

Do you want to get the shiniest, greatest interface of 2027 installed, only to discover in 2035 that after three upgrade cycles, your hardware is obsolete and won't be supported?

Do you want to get the best-tested, most upgradable interface of 2027 only to discover in 2028 that you were one of the hundred people who were hit by a zero-day exploit and a bot-net has root in your head?


> where our thoughts get piped directly to Alexa/Siri/Cortana/Now

...and around the same time, they'll probably also figure out how to pipe adverts directly into our brains. I guess you could call that the scary side of "seamless" HCI.


Maybe, but that has to be a long way out. I can see what the parent describes happening well before brain-to-text.


I subvocalize when I read and it feels like I'm breathing the words. I think my vocal cords move too.


Jane! Just bought my little sister Speaker For The Dead. \m/


You are forgiven for the lack of a (TED talk) reference, oh wise Hacker News commenter.


If the goal is solely Docker images with a standard size in the 20-40MB range, this can be achieved without additional tooling. After switching our development and deployment flow to Docker, my team quickly tired of the 200-400MB images that seemed to be accepted as commonplace. We started basing our containers on Alpine (essentially, BusyBox with a package manager) or Alpine derivatives, and dropped into that target size immediately. Spinning up 8-10 microservices locally for a full frontend development stack is a shockingly better experience when that involves a 200MB download rather than a 2GB one.
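
As an illustration of the approach, a sketch of a Dockerfile for a made-up Node service (image tags and sizes are approximate; the same idea applies to other runtimes):

    # node:8-alpine is roughly 60-70MB vs 600MB+ for the Debian-based image
    FROM node:8-alpine
    WORKDIR /app
    COPY package.json .
    RUN npm install --production
    COPY . .
    CMD ["node", "server.js"]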

This is in no way a negative commentary on Nix; it looks like an interesting solution to a well-known problem.


Same here! Switching to Alpine for most services was essentially painless. To go a step further, the images with binaries that have no dependencies (mostly programs written in Go) use scratch Docker images. This way we get 5MB images, where the size overhead of Docker is nothing.
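
A sketch of what that looks like for a statically linked Go binary (built outside the image here; paths and names are made up):

    # First: CGO_ENABLED=0 GOOS=linux go build -o myservice .
    # Then the image contains nothing but the binary itself:
    FROM scratch
    COPY myservice /myservice
    ENTRYPOINT ["/myservice"]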


I have found that images with a single executable and perhaps /etc/passwd, with no other files, prevent using docker exec as a valuable debugging/poking tool. My preference is to have a single image with all the services and basic tools included and use it to run all the containers on the machine.


We have an open source solution called Cryptex[0] to handle this. It's better explained by this blog post[1] that gives the thinking and configuration necessary for most scenarios.

[0]: https://github.com/TechnologyAdvice/Cryptex
[1]: http://technologyadvice.github.io/lock-up-your-customer-acco...


Depending on the data you're storing, you may be responsible for HIPAA compliance. Such a thing is possible on AWS[0], but is not provided out-of-the-box.

[0]: https://aws.amazon.com/compliance/hipaa-compliance/


I'm not in the US (though I've looked at the HIPAA guidelines anyway in the course of my research); I'm in the UK and will only be storing UK data, at least initially. I suspect there is strong demand for the idea elsewhere, but I'm not planning on (a) making huge amounts of money or (b) supporting other countries, since the laws on medical data are so varied. I spoke to friends in local government who put me in touch with the people who deal with storing medical data for them. As long as I follow best practices, make sure that users are aware of the license terms of using the system, and behave ethically, that appears to be all that is required -- except, of course, obeying the rules on DPA/PII (Data Protection Act, Personally Identifiable Information). As I'm not a public organisation their rules don't apply, though I'm still going to follow all their guidelines anyway.

I'm still going to speak to the company solicitor though just for belts and braces.

Oh, and on the hosting: I won't be using any cloud services. I'll have a physical server in a state-of-the-art DC a few miles up the road that is certified as a provider to UK Gov standards. They pretty much tick every box I'd ask for, and though not cheap, I can get an insanely powerful machine and they have a superb reputation. I'm looking at approx 75 quid ($110) per month for a dual-core i3-4160/8GB RAM w/1TB RAID, or £145 ($210) a month for a Xeon 1231 with 32GB RAM and 2TB of RAID storage (that one has dual power supply, n/c), which, for what you get, isn't that expensive at all.


Is there a basis for such an assumption?

For an organization requiring the highest available security, the ideal solution would be a privately operated hardware security module kept off the DMZ. However, that, as well as the idea of self hosting (and maintaining) the entire dev, test, deploy, and prod stack suggested by another commenter, isn't always within reach of a small, agile team looking to focus on their core competencies.

One could argue that it's possible for Amazon to have falsified the description of KMS as an HSM, or the certifications[0] they were granted for it, but I'd retort that an organization in a position to seriously question those claims shouldn't be using a remote solution anyway.

So, making the more rational assumption that such claims by Amazon can be trusted, their offering is quite secure: the HSM does not allow the export of any key, and exposes only the ability to load encrypted data into the device and have it produce the decrypted result over a secure channel, and vice versa.

[0]: https://aws.amazon.com/kms/details/#compliance
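
To make that flow concrete, a rough sketch of envelope encryption against KMS with the Node aws-sdk (key alias and plaintext are placeholders; this isn't necessarily how Cryptex implements it internally):

    const AWS = require('aws-sdk');
    const crypto = require('crypto');
    const kms = new AWS.KMS({ region: 'us-east-1' });

    // Ask KMS for a fresh data key: we receive the plaintext key for local
    // encryption plus a ciphertext blob that only KMS can ever decrypt.
    kms.generateDataKey({ KeyId: 'alias/my-app', KeySpec: 'AES_256' }).promise()
      .then(({ Plaintext, CiphertextBlob }) => {
        const iv = crypto.randomBytes(12);
        const cipher = crypto.createCipheriv('aes-256-gcm', Plaintext, iv);
        const encrypted = Buffer.concat([cipher.update('my secret', 'utf8'), cipher.final()]);
        // Persist encrypted + iv + cipher.getAuthTag() + CiphertextBlob;
        // the master key itself never leaves the KMS hardware.
        return { encrypted, iv, tag: cipher.getAuthTag(), encryptedKey: CiphertextBlob };
      });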


I said it above, but I'll reiterate here that Amazon KMS does not use HSMs; they don't provide a lot of detail to help you reason about what that implies for key security. (I agree that there's no reason to believe they're lying or that it's backdoored.) There's also not much discussion about where the authorization checks happen, and the security of key operations is only as secure as the entity to whom that is delegated.


Re: your first line, yes: the existence of https://aws.amazon.com/govcloud-us/pricing/ -- and we know how the US Gov feels about computers.


This is certainly the case; however, for an organization implementing best practices for code deployment, such a change would have to be peer-reviewed in the best case, or pushed directly to master with an obvious paper trail in the worst. It wasn't my intention to imply that employing well-designed envelope encryption would shut the door on any possibility of an engineer gaining access to secrets; clearly there's a lot more involved in making that happen. However, this goes a long way toward allowing the source of any leaks to be traced should they occur.


Presumably, your rogue employee won't follow best practices, and there is not a quality audit trail for such abuse in most setups. I think we're in agreement: this is a hard problem and difficult to solve. In your article, that part of the problem statement is a red herring, as Cryptex doesn't solve it.

