We do a tremendous amount of testing to ensure real-world reliability, and our customers' results bear that out. Full functional safety certification is slated for end of this year, which means it's already well underway.
We make a point of this because legacy spinning lidar is unreliable. But it's unreliable because of the analog design, not because spinning is inherently unreliable.
This seems dubious, to be honest. Moving parts break, if only from mechanical wear. Gyroscopic forces from the spinning motion, for example, are less than ideal for drones.
I realize a solid-state lidar may be a very challenging prospect, but it would be a huge selling point!
If the device is reliable then you should quote a FIT number (FIT = failures per 10^9 device-hours). A very good VCSEL-based transceiver in an indoor environment has a FIT of about 100 at Tj ~65 C and a CI of 60%. If we assume your FIT rate is similar (it won't be, because your operating conditions are more difficult) and you have 128 of these devices, your system FIT rate is ~12,800 (assuming independent failures). This puts your MTBF at around 8.9 years.
Some transceivers have a FIT rate of ~300, so if that's the case your MTBF will be only around 3 years.
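The arithmetic above can be sketched in a few lines. FIT is failures per 10^9 device-hours, and independent failure rates simply add; the per-device FIT values are the assumed figures from the comment, not measured numbers:

```python
# FIT (failures in time) = failures per 1e9 device-hours.
HOURS_PER_YEAR = 8760

def system_mtbf_years(fit_per_device: float, n_devices: int) -> float:
    """MTBF in years, assuming independent failures (FITs add)."""
    system_fit = fit_per_device * n_devices
    mtbf_hours = 1e9 / system_fit
    return mtbf_hours / HOURS_PER_YEAR

# 128 emitters at 100 FIT each -> roughly 8.9 years
good_case = system_mtbf_years(100, 128)

# 128 emitters at 300 FIT each -> roughly 3 years
worse_case = system_mtbf_years(300, 128)
```

This reproduces both numbers in the comment: 12,800 FIT gives ~78,125 hours, i.e. about 8.9 years, and 38,400 FIT gives about 3 years.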
This is the spec for a cold start. If you give it a warm start, you can operate it at temperatures well below -20 C! For instance, it's being used in underground mines in Scandinavia without issue.
The issue will not be at cold but at high temperature. VCSELs have very poor efficiency at high temperature, and it's possible to drive them into a regime where increasing current actually reduces light output (thermal rollover). In a vehicle application the temperatures are very high, and humidity can also be very high and condensing.
This couldn't be further from the truth. You can design the VCSEL cavity and top and bottom mirrors for peak efficiency at any temp, including very high temps. I wonder what we did...
Compared to the side-emitting diode lasers used in legacy spinning lidar, VCSELs are cheaper, more efficient, more reliable, longer-lived, and better-quality light sources to boot.
Unfortunately the gain falls as a function of temperature, so you also get a lot less light and you have to pump harder (more current). So while it's possible to compensate somewhat with the mirror design, the device still shows this behavior at high temperatures as it self-heats. This behavior is widely documented in the literature.
VCSELs have a smaller current aperture, and the current density is higher than in an edge-emitting laser. Since reliability is a function of junction temperature and current density, VCSELs operating at high temperatures have significantly reduced lifetimes compared to edge-emitting devices, due to the high current density.
See for example slide 5 which shows how lifetime scales as a function of temperature and current density. For high reliability your devices need to have low current density.
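The scaling described here is commonly modeled as an Arrhenius temperature term multiplied by a current-density power law. A minimal sketch of that model follows; the activation energy `ea_ev` and exponent `n` are illustrative assumptions, not values from the referenced slide:

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def lifetime_acceleration(tj1_c, tj2_c, j1, j2, ea_ev=0.7, n=2.0):
    """Ratio of lifetime at condition 1 vs condition 2, using an
    Arrhenius term for junction temperature and a J^-n power law
    for current density. ea_ev and n are illustrative assumptions;
    real values come from accelerated life testing."""
    t1 = tj1_c + 273.15  # junction temperatures in kelvin
    t2 = tj2_c + 273.15
    arrhenius = math.exp((ea_ev / K_B) * (1.0 / t1 - 1.0 / t2))
    current = (j2 / j1) ** n  # lifetime falls as J^-n
    return arrhenius * current

# Under these assumptions, a device at Tj = 65 C and half the
# current density lasts many times longer than one at Tj = 105 C.
factor = lifetime_acceleration(65, 105, j1=1.0, j2=2.0)
```

The qualitative takeaway matches the comment: both the temperature term and the current-density term compound, which is why low current density matters so much for high-reliability designs.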
This has a single moving part - a brushless motor that turns the turntable. It's rated for over 100,000 continuous hours of operation, and passes automotive shock and vibration standards.
There's a good explanation in the post about what we mean by digital lidar, but the tl;dr version is we use silicon CMOS chips for lasers and detectors vs analog components like side emitting lasers and APDs used by legacy lidar providers.
Solid state is a bit of a buzzword, and most "solid state" lidar sensors actually have small, delicate moving parts inside. Solid state sensors are aimed primarily at consumer vehicles, which are still many years away.
The benefit is (at least in theory) easier integration into the vehicle fascia and (again, in theory) higher reliability vs legacy spinning lidar, which are quite unreliable in the real world.
Ouster's digital lidar sensors are much more reliable than the legacy analog spinning lidar sensors, and much more compact - and therefore easier to integrate.
While this is a ~80% discount on other 128 beam sensors, it's unfortunately still out of reach for the hacker community. We absolutely plan to get prices down to an affordable level for individuals in well under 5 years!
Also, Ouster runs a sponsorship program that gives deeply discounted or free sensors to cool projects. If you have a cool idea, shoot me an email: derek.frome at ouster dot io
Might be interesting to add the Ouster sensor to our sensor simulation [1] to give people the ability to play around with the data even if it's outside the price range?
Oh, this is interesting! I've been putting together a 6-Kinect rig to take a 3D scan of my body as I go on hormone treatment and an exercise routine, monitoring subtle changes over time.
Does it support Kinect v1 and changing the orientation using the built-in motors?
I also have a few projects using photogrammetry reconstruction of convention booths using 2D images. I've been interested in adding in lidar/pointcloud cameras...
"But if Tesla ultimately succeeds, it won't be because it's easier to achieve full autonomy without lidar than with it. It will simply be because Tesla began large-scale data collection from cameras long before other carmakers.
In short, the fact that Tesla backed itself into a corner by promising customers full autonomy without lidar doesn't prove that other companies won't find lidar helpful to their own self-driving efforts."
A great summary. The only thing it misses is that lidar is getting more and more like a depth camera. SPADs can sense ambient light and create 2D images that are perfectly correlated to 3D images, making it possible to apply 2D algorithms to 3D data [1].
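A toy illustration of the "perfectly correlated" point (the data and array names are hypothetical): because the ambient-light image and the range image come from the same pixels, a detection made by a 2D algorithm indexes straight into the 3D data with no registration step.

```python
# Two co-registered images from the same SPAD pixels (hypothetical data):
# ambient-light intensity and range in meters, as row-major 2D lists.
intensity = [
    [10,  12, 250, 11],
    [ 9, 240, 255, 10],
    [11,  13, 245, 12],
]
range_m = [
    [30.0, 29.8, 4.2, 31.0],
    [29.9,  4.4, 4.1, 30.5],
    [30.2, 29.7, 4.3, 30.8],
]

# A "2D algorithm" (here, simple thresholding) finds bright pixels...
bright = [(r, c) for r in range(3) for c in range(4) if intensity[r][c] > 200]

# ...and the same (row, col) coordinates read out depth directly,
# because the two images share pixel geometry.
depths = [range_m[r][c] for r, c in bright]
```

In a real pipeline the thresholding step would be any 2D vision algorithm (edge detection, a CNN, feature tracking); the point is only that its pixel coordinates are already valid indices into the 3D data.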
Tim is one of the best informed journalists on lidar - and this is a pretty solid summary of where the leading companies are (although Luminar continues to be incredibly misleading).
Ouster | San Francisco, CA, USA | Full-time | On-site
Role: Embedded Linux Engineer, C++ generalist, DevOps Engineer (among others - ouster.io/careers)
Product: Publicly, we design and manufacture high-performance lidar sensors that outperform Velodyne's products at much lower cost. The system we've developed has all of the core aspects of an AV - real-time sensor fusion, localization/state estimation, HD map generation, and a real-time perception stack for semantic scene segmentation, object tracking, classification, and decision making - but many of these modules are in their early stages. We have also developed a crowdsourced 3D mapping product that we've been deploying on customer vehicles, with the goal of 3D mapping the earth. The product is already shipping to fleets, ride-share companies, and carmakers. 65-person company in The Mission, San Francisco.
Hi Derek. Thank you for mentioning us here. The age-old conversation on transport protocols is evergreen. The goal of these protocols is to allow data transmission from one device to other devices. It is great to be on the leading edge of technology, continuing the innovative optimizations of the use of the internet. As the internet scales, more devices need connectivity and each byte counts more and more. There are 20 billion connected internet devices today (2018) - more than twice the number of humans - and the device-to-human ratio continues to grow. We need efficient and effective methods for coordinating information between devices.
The various methods to coordinate information between devices on the internet should be looked at objectively and mechanically. At the end of the day, the modern reliable messaging protocols are built on IP packets, the basis of our internet. Application-layer protocols built on IP are not equal: many methods are not compatible with the various configurations of networks, and the bytes and bandwidth required by each production-ready method differ.
* HTTP/1.1 - 100% Compatibility
* HTTP/2.0 - 100% Compatibility with client initiated connectivity and backward compatibility with HTTP/1
Each message received using these mechanisms requires TCP ACKs. The promise of MQTT and WS leads you to believe that data streaming to your device over WS or MQTT doesn't require ACKs, but this is not how TCP works. When packets are received, there is an associated timeout, and a retransmission occurs when an ACK is late or missing. Additionally, lightweight application-layer traffic (keepalives) is required to maintain connectivity between two endpoints; otherwise LRU evictions and quotas are triggered, and routes can be treated as stale and dropped altogether. This underlying mechanism is often left out of discussions of these protocols.
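To make the overhead point concrete: a minimum IPv4 header is 20 bytes and a minimum TCP header is another 20, and a pure ACK segment carries no payload at all. A small arithmetic sketch, assuming one ACK per message (a worst case; delayed ACKs typically reduce this):

```python
IP_TCP_HEADER = 40  # minimum IPv4 (20) + TCP (20) header bytes, no options

def wire_bytes(payload_bytes: int, acks: int = 1) -> int:
    """Bytes on the wire for one message: the data segment plus any
    pure-ACK segments. Assumes the message fits in one segment."""
    data_segment = IP_TCP_HEADER + payload_bytes
    ack_segments = acks * IP_TCP_HEADER  # a pure ACK is headers only
    return data_segment + ack_segments

# A 20-byte sensor reading costs 100 bytes on the wire under these
# assumptions -- 80% overhead before any application-layer framing.
total = wire_bytes(20)
```

For the small, frequent messages typical of IoT traffic, this fixed per-segment cost is exactly why "each byte counts" at scale.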
There is a clear winning approach in my mind. HTTP/2.0 supports server-initiated data push, and TLS (required in practice) together with header compression, which is part of the spec, allows for a secure yet efficient streaming solution. With HTTP/2.0, TCP socket limits are less of a concern, as the client needs only one TCP socket to subscribe to an unlimited number of data feeds. HTTP/1 requires the client to maintain a separate socket for each independent stream, since HTTP/1 has no native multiplexing and enforces head-of-line ordering on a connection. One thing we've done specially for HTTP/1 clients: we've added multiplexing by allowing multiple topic subscriptions and filter expressions to be passed in a single HTTP call on the same socket. This isn't natively built into HTTP/1 and is supported on all our SDKs.
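The single-socket multiplexing that makes this work can be sketched at the frame level: each chunk of data is tagged with a stream id, interleaved over one connection, and demultiplexed on arrival. This is a toy model of the mechanism, not the actual HTTP/2 wire format:

```python
from collections import defaultdict

# Interleaved frames as they might arrive on ONE connection:
# (stream_id, payload). Three logical subscriptions share the socket.
frames = [
    (1, "temp=21.0"),
    (3, "lat=37.77"),
    (5, "level=ok"),
    (3, "lon=-122.42"),
    (1, "temp=21.1"),
]

def demux(frames):
    """Reassemble per-stream message lists from interleaved frames."""
    streams = defaultdict(list)
    for stream_id, payload in frames:
        streams[stream_id].append(payload)
    return dict(streams)

streams = demux(frames)
# Each stream sees only its own messages, in order, despite the
# interleaving -- which is why one socket can carry many feeds.
```

In HTTP/2 proper, the frame header carries the stream identifier and flow-control applies per stream; the sketch above only shows why interleaving tagged frames removes the one-socket-per-stream constraint of HTTP/1.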
This is why we have chosen HTTP/2.0 as our next-gen transport protocol. We have started by providing HTTP/2.0 connectivity at our edge for select customers. As of 2018, PubNub holds the world record for the largest online concurrent event in human history, using HTTP/2.0 for live data streams at a globally celebrated sporting event.
You should be using HTTP/2.0 for your customers. Here is a Dockerfile that makes it easy to start testing HTTP/2.0 - https://github.com/stephenlb/http2-proxy