This is an instructive article on how to get a lot done improving systems as a "utility knife" developer in a growing startup, and how to do it responsibly.
I think that was obvious at least 10 years ago. Airbus’s bet at the time raised a lot of eyebrows, while Boeing’s didn’t. The only thing Boeing screwed up on was outsourcing too much of the assembly.
> The only thing Boeing screwed up on was outsourcing too much of the assembly
Selling planes is complicated. For the same reason the F-35 sources random things from practically everywhere [1], Boeing may have found it advantageous to have suppliers in the countries of national airline purchasers.
The F-35 program's decision to source parts from nearly every state in the US is a political one, not a logistical one. Boeing finds it much easier to convince Congressmen to continue the program if jobs would be lost in their state if the program was cut.
> Boeing finds it much easier to convince Congressmen to continue the program if jobs would be lost in their state if the program was cut
Just as Boeing might find it easier to convince Singapore Airlines to buy its planes if Singapore's government knows jobs would be lost if demand for the plane is insufficient.
The comment you're replying to is saying "selling planes is complicated", not building them. Convincing Congress to continue the program falls under selling.
Boeing’s decision to outsource internationally on the 787 was more of a political move (well, foreign customers buy planes, you know) than a technical one. Surely they would have outsourced some parts, but not the wings and the tail!
International supply chains are the rule rather than the exception. The 787 pushed the novelty envelope further than it needed to with the amount of composite material used, which was one of the factors delaying launch (though in time this may look like a smarter bet, since it will be easier to re-engineer against future generations of aircraft where composites are commonplace). It also had major issues with engines and batteries, which were always going to be supplied by third parties.
Dynamic Yield’s unified customer engagement platform helps marketers increase revenue by automatically personalizing each customer interaction across the web, mobile web, mobile apps and email. The company’s advanced customer segmentation engine uses machine learning to build actionable customer segments in real time, enabling marketers to take instant action via personalization, product/content recommendations, automatic optimization & real-time messaging.
As a member of the Customer Success Team, your main objective is to assist with project delivery, maintaining a high level of satisfaction for our customers.
What you need to succeed:
– You have a technical degree and working experience in professional services/consulting at an enterprise software company or consultancy. Self-taught hackers are also welcome.
– Knowledge of online marketing functions is an asset, though not required. You will be comfortable responding to the varying demands of working for a dynamic, international company.
– Bonus if you have experience working with online Publishers or eCommerce.
– You are a self-motivated individual with well-developed interpersonal & communication skills and a strong desire to succeed.
Required Skills:
– Fluency in JavaScript and experience with jQuery.
– HTML / CSS and web technologies across different platforms.
– Business analysis and understanding of the digital marketing space, especially in the field of online marketing.
Hi, I work at Placemeter, the company that produced this map and the technology behind it. Check out our privacy principles [0] and feel free to email me [1] if you have any questions. Thanks!
The one liner is right below the larger headline on the landing page, perhaps we should've made it more prominent: "The Placemeter Sensor measures pedestrian, vehicle, and bicycle traffic in real-time."
We are looking for computer vision engineers, from entry level to experienced, to extend, develop, and maintain our algorithm stack.
+ You will design the next generation of computer vision algorithms
+ You will optimize and deeply understand these algorithms and scale them
+ You will design and maintain the quality assessment tools required to make sure our algorithms perform well
We use computer vision at a massive scale, on a large number of rich and ubiquitous video feeds, to understand what is going on in the physical world in real time. We measure how busy places are, what people do, how fast cars go, and much more. We offer that data to developers, citizens, cities, and retailers, radically changing the way they interact with the physical world.
ABOUT PLACEMETER
Placemeter uses computer vision algorithms to create a real time data layer about places, streets, and neighborhoods. Placemeter’s technology gives businesses, cities, and people the ability to take a place’s pulse.
I am a PM at Placemeter. We do not "use facial recognition software" in our algorithms, as Density's website claims of video-based systems. None of our algorithms use biometric markers for our counting—we're essentially the same "dumb" counters as Density's IR with the added advantage of accuracy and area of coverage.
Placemeter does much more than pay lip service to privacy. We pride ourselves on our privacy efforts. If you want to learn more about them, @afar, email me: david@placemeter.com
I am curious, particularly about your new sensor. It sounded like much of the video processing happens on the unit itself, meaning faces never reach Placemeter servers. Is it accurate to say that you're only getting counts and movement data?
My other question is how you derive count without uniquely identifying someone. If I'm entering a shop and a PM sensor sees me, will it know when I leave?
1) The processing happens on board the sensor; only counts are sent back to the servers.
2) We don't do unique identification like that. We use object detection, which is different from using unique biometric markers like face detection. That means we can track a person or a car within the frame of view, but not if they exit and re-enter the frame, as in the case you described.
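To make the distinction concrete, here's a toy sketch (not our production pipeline) of what counting via object detection without biometric markers can look like: background subtraction picks out moving blobs, centroids are matched frame to frame, and a crossing of a virtual line bumps a counter. The input file name, line position, and thresholds below are made up.

    import cv2

    # Toy anonymous counter: background subtraction + centroid tracking.
    # Nothing biometric is extracted; once a blob leaves the frame its
    # track is gone, so a re-entering person counts as a new object.

    cap = cv2.VideoCapture("street.mp4")   # made-up input feed
    subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

    LINE_Y = 240          # virtual counting line (assumed frame layout)
    MIN_AREA = 500        # ignore small noise blobs
    count = 0
    prev_centroids = []   # centroids from the previous frame

    while True:
        ok, frame = cap.read()
        if not ok:
            break

        mask = subtractor.apply(frame)
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)

        centroids = []
        for c in contours:
            if cv2.contourArea(c) < MIN_AREA:
                continue
            x, y, w, h = cv2.boundingRect(c)
            centroids.append((x + w // 2, y + h // 2))

        # Naive nearest-neighbour association: if a blob moved downward
        # across the counting line since the last frame, count it once.
        for cx, cy in centroids:
            for px, py in prev_centroids:
                if abs(cx - px) < 40 and py < LINE_Y <= cy:
                    count += 1
                    break

        prev_centroids = centroids

    cap.release()
    print("objects counted:", count)   # only this aggregate leaves the sensor

The point is that only the running count ever needs to leave the device; nothing about any individual is stored.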
Does Density still use wifi pinging for part of its counts?
That's really interesting. Given a certain level of granularity, would it be possible for a person to have a unique object signature? I guess at that point, you'd just use a face. Just curious.
No wifi pinging. After Apple almost killed us a year ago with their MAC address policy change and we realized there was significant pushback on privacy, we dropped the technology altogether.
Not quite sure I understand your question. We store counts, not individual object IDs, so at an individual granularity it would be the same as your IR device counting one person.