Following the news that Velodyne, the leading LIDAR provider, had selected Xilinx’s computer chips to power its technology, Xilinx’s Senior Director of Automotive Business, Willard Tu, explains what the deal means to the company and how crucial AI is, and will be, to the business.

Can you tell me about the company’s history and whether LIDAR is a new avenue for you?

We started in the infotainment space, where some of the SoC (system-on-chip) vendors weren’t able to drive the display; we were the companion chip in many cases. As the market grew, some of those SoC vendors began to incorporate some of what we did and the business started to trend down. We are still very useful and have a presence at the higher end in displays, but that business shifted and we really pushed into ADAS (Advanced Driver-Assistance Systems) as the primary market where we see a lot of growth.

The reason we did that is that the market is innovating and constantly changing. If you look at infotainment, even though that’s still growing, it’s not growing in leaps and bounds – the LIDAR space is. Some of the products were eight channels not that long ago and now they’re all the way up to 128 channels. Some of our competitors have attempted to make a device to fit surround view and forward-looking cameras, and they may even try LIDAR in the future, but the reality is the space is in a constant state of change.

On top of that, AI techniques are changing all the time. Not that long ago everybody was using 32-bit floating-point data structures for their AI, then they all transitioned to integer – first 32-bit, then eight-bit. Xilinx has even published a white paper that talks about using four-bit integers, and many of our top customers are doing mixed mode, with eight-bit integer quantization alongside four-bit or even two-bit quantization.
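To make the quantization idea concrete, here is a minimal sketch of symmetric post-training eight-bit quantization in Python. This is a generic illustration of the float-to-integer shift described above, not Xilinx’s specific scheme:

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric post-training quantization of float32 weights to int8.
    A generic sketch, not Xilinx's specific quantization method."""
    scale = np.max(np.abs(weights)) / 127.0  # map the largest weight to +/-127
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float32 values to check quantization error."""
    return q.astype(np.float32) * scale

weights = np.random.randn(4, 4).astype(np.float32)
q, scale = quantize_int8(weights)
print("max quantization error:", np.max(np.abs(weights - dequantize(q, scale))))
```

Lower-precision variants (four-bit, two-bit) follow the same pattern with a smaller integer range, trading some accuracy for silicon and memory savings.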

These techniques are brand new. Not only are the applications themselves expanding in scope and capability, but the AI techniques on top of them are changing. LIDAR is no different. And in all the ADAS categories, Xilinx is either number one or number two. Because of that, it’s been an area of strength for us, both in understanding and applying machine learning techniques and in building the application itself.

Can you tell me more about LIDAR?

People generally want to split LIDAR into two product classes. There’s point cloud, which is raw data out, meaning I’m streaming a lot of dense data over Ethernet. Yet not every vehicle has strong Ethernet networking capability. If you have a CAN (Controller Area Network) output for your LIDAR, then you’re providing object-oriented data, which is what most radars do. That means your sensor is smarter: it processes the point cloud in the sensor itself and sends you the output.

That output is a minimal amount of data and it tells you if there’s an object. If the object’s 100 metres away, the sensor only needs to transmit a very small amount of data – for example, that it’s a human and how far away it is. Whereas with the point cloud, I’m sending a lot of data points out, and some other device is doing that perception process.
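As a rough illustration of those two output classes, the sketch below contrasts a raw point-cloud record with an object-oriented record. The field names and record counts are hypothetical:

```python
from dataclasses import dataclass
from typing import List

# Raw "point cloud" output: one record per laser return, streamed at a high rate.
@dataclass
class LidarPoint:
    x: float        # metres
    y: float
    z: float
    intensity: int  # return strength

# Object-oriented output: the sensor runs perception itself and emits only
# a compact, radar-style description per detected object.
@dataclass
class DetectedObject:
    object_class: str   # e.g. "pedestrian", "vehicle"
    distance_m: float
    bearing_deg: float

# A multi-channel sensor can produce hundreds of thousands of points per frame...
frame: List[LidarPoint] = [LidarPoint(0.0, 0.0, 0.0, 50)] * 100_000
# ...while the object list for the same scene may be a handful of records.
objects: List[DetectedObject] = [DetectedObject("pedestrian", 100.0, 2.5)]
```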

Xilinx devices are fully capable of supporting the raw point cloud output alone, or of including the capability to do the classification on the sensor as well. Then you can see where that impact is on the AI front, using different techniques.

Could you run through some case studies or examples of where exactly the technology is being applied?

If you’re talking about the applications of LIDAR, there are a lot. Obviously it’s in automotive, whether that’s robot taxis or ADAS. You’ll also see LIDAR in security systems, and in robotics as another way to do very accurate, fast ranging measurements. Cameras have a challenge with range: you can use stereoscopic vision, but the distance is inferred rather than precisely measured. With LIDAR you get a very precise measurement because it’s measuring the time of flight of the light beam.
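The arithmetic behind that precision is the standard round-trip time-of-flight formula, sketched below:

```python
C = 299_792_458.0  # speed of light in m/s

def range_from_tof(round_trip_s: float) -> float:
    """Target distance from a round-trip time-of-flight measurement.
    Halved because the pulse travels out to the target and back."""
    return C * round_trip_s / 2.0

# A return arriving ~667 nanoseconds after the pulse left corresponds
# to a target roughly 100 metres away.
print(f"{range_from_tof(667e-9):.2f} m")  # ~99.98 m
```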

There’s also industrial warehousing, where again LIDAR technology is being used to scan an area and help navigate robots to pick and place packages. Security, warehousing, industrial automation and even UAVs (Unmanned Aerial Vehicles) are natural areas for LIDAR technology. There’s just limitless opportunity for this new technology. One of the challenges is that everybody wants it to be cheaper and more affordable, and that takes a little time as the volume goes up.

Many tech leaders are saying that over the past five years AI has driven nearly everything forward including automation, machine vision, computer vision and robotics. Do you have the same view?

I absolutely agree. Whether it’s robotics, industrial or ADAS features, it’s great to have a sensor, but AI is the ability to make that richness of data mean something. If I have a point cloud, I’ve got a picture. So what? Unless I’m the human reading it, I don’t know anything. Once I add machine learning, the machine can say, “Oh, that’s a human. That’s a dog. That’s a paper box on the road and you can run over that”. If you have a big paper box and you’re just using radar, it might decide that big paper box is a hazard and stop, and then you’re left asking whether it was worth it.

The one thing I think consumers don’t want is a lot of what we call false-positive detections, where the system did detect something that might have been correct, but it’s not something you need to take action on. That becomes very annoying. If every time your car saw an object on the road, such as a pop can or a plastic bag floating in the air, it suddenly decided to come to a stop, you’d be annoyed: you just paid thousands of dollars extra for this capability and it’s not performing. That’s another reason why I think people need to have a lot of different sensors.
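A toy sketch of the kind of class-aware decision logic being described here; the object classes and braking threshold are purely illustrative assumptions:

```python
# Hypothetical planner logic: a classifier lets the vehicle ignore harmless
# detections that a classless sensor would have to treat as hazards.
HARMLESS = {"paper_box", "plastic_bag", "pop_can"}

def should_brake(object_class: str, distance_m: float) -> bool:
    if object_class in HARMLESS:
        return False            # drivable debris: no action, no false positive
    return distance_m < 50.0    # assumed reaction threshold in metres

print(should_brake("paper_box", 20.0))   # False: drive over it
print(should_brake("pedestrian", 20.0))  # True: stop
```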

Even though those sensors are fantastic technology, what do developers really want? They want the output of what those sensors do. Now, the robot taxi space is a little bit different. Those guys say, “I need the data and I need to figure out what it is myself”. They really want to be in control. I think a lot of the ADAS developers that are just trying to put some advanced safety features in want a simplified output. They want to know what’s in front of them, how far away it is and whether they have to take action.

That’s why the CAN output is a lot friendlier to them: it’s a lot less data transmission and they don’t have to worry about having a much newer communications architecture inside their vehicle. On top of that, a more advanced communications architecture like Ethernet is going to require a lot more cost in silicon. An Ethernet chip also carries a different cost structure today, so Ethernet is not pervasive in every node the way CAN is. The market will adjust to that as well.
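Some back-of-the-envelope numbers show why raw point clouds outgrow CAN while object lists do not. The sensor figures below are illustrative assumptions; the 1 Mbit/s ceiling is that of classic CAN:

```python
# Assumed output of a modern multi-channel LIDAR streaming raw points.
POINTS_PER_SECOND = 1_000_000
BYTES_PER_POINT = 16           # x, y, z, intensity as 4-byte floats

point_cloud_bps = POINTS_PER_SECOND * BYTES_PER_POINT * 8
print(f"point cloud: {point_cloud_bps / 1e6:.0f} Mbit/s")   # 128 Mbit/s

# Classic CAN tops out at 1 Mbit/s, so raw point clouds demand Ethernet.
# An object list, by contrast, fits comfortably on CAN:
objects_per_frame, bytes_per_object, frames_per_second = 20, 16, 20
object_bps = objects_per_frame * bytes_per_object * frames_per_second * 8
print(f"object list: {object_bps / 1e3:.1f} kbit/s")        # 51.2 kbit/s
```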

Why do you think Velodyne chose you as their provider over other competitors?

It’s the ability to have a single device drive their whole LIDAR and to scale across multiple products. Scalability is a major issue: nobody wants to pick one product for one version of LIDAR and a different product for another. Being able to stay within one family is very, very important to them. That way they get software re-use.

The other part is thermal. Sometimes you might have a DSP (Digital Signal Processor) or GPU (Graphics Processing Unit) and it could probably do the work, but these are what we call serial pipelined engines. Think of them as a series of different chains of processing you have to go through. With those serial processing engines – whether GPU, DSP or CPU (Central Processing Unit) – I can vary the data size going into the pipe, but I’ve still got to go through all these different stages. With that, the signal processing becomes fairly complex if you’re doing a lot of data processing.

Instead, with FPGAs (Field Programmable Gate Arrays), we can parallelise. Think of it as having multiple – maybe 50 – GPUs or CPUs and DSPs all working in parallel on the data. In essence, instead of using just a portion of the silicon at a time, with FPGAs we can use the whole chip, and that really helps keep them within a lower thermal constraint.
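The serial-versus-parallel contrast can be sketched in software, with the caveat that a process pool only mimics what an FPGA does spatially in silicon on every clock cycle:

```python
from concurrent.futures import ProcessPoolExecutor

def chain(sample: float) -> float:
    """Toy signal-processing chain: every sample passes through every stage."""
    filtered = sample * 0.5            # stage 1: filter
    thresholded = max(filtered, 0.0)   # stage 2: threshold
    return thresholded ** 2            # stage 3: feature extraction

samples = [float(i) for i in range(1_000)]

# Serial pipelined engine (the CPU/GPU/DSP analogy): one chain, samples queue up.
serial_out = [chain(s) for s in samples]

# FPGA analogy: replicate the whole chain so many samples are in flight at once.
if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        parallel_out = list(pool.map(chain, samples, chunksize=100))
    assert parallel_out == serial_out  # same result, higher throughput
```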

So thermal efficiency and scalability are our primary advantages. Then we have the ability to use our programmable logic to make custom AI signal-processing chains that are probably unique to them. A lot of LIDAR companies will experiment with different types of signal processing. I bet they’re still doing it, so that they could even launch a product and then, in the field, reprogram the signal-processing chain. It’s like being software-upgradeable, except they’re using software to change their hardware. That way it’s a brand new signal-processing chain: “Hey, we can get better results by doing these types of techniques”.
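In software terms, that field-upgradeable chain might look like the hypothetical sketch below; on an FPGA the equivalent step is loading a new bitstream that rewires the logic itself:

```python
from typing import Callable, List

ProcessingStage = Callable[[List[float]], List[float]]

class SignalChain:
    """Hypothetical reconfigurable signal-processing chain."""
    def __init__(self, stages: List[ProcessingStage]):
        self.stages = stages

    def run(self, samples: List[float]) -> List[float]:
        for stage in self.stages:
            samples = stage(samples)
        return samples

    def field_update(self, stages: List[ProcessingStage]) -> None:
        """Swap in a new chain after deployment, like reflashing a bitstream."""
        self.stages = stages

chain = SignalChain([lambda s: [x * 0.5 for x in s]])
print(chain.run([1.0, -2.0, 3.0]))
# Six months later a better technique ships, with no new hardware:
chain.field_update([lambda s: [x * 0.5 for x in s],
                    lambda s: [max(x, 0.0) for x in s]])
print(chain.run([1.0, -2.0, 3.0]))
```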

You can imagine, if you’re at a pace of rapid innovation, you may have something that you think is pretty good today and then six months later you’ve found a new technique. Again, you don’t want to call your customer up and tell them they’ve got to buy a brand-new LIDAR for that – that’s going to be annoying. I think for all those reasons, Velodyne sees that Xilinx technology gives them the ability to reprogram it in the field and maybe even reconfigure the hardware, not just the software. We provide the scalability to go from one product to another so they don’t have to redo all their software.

I also think Xilinx technology gets a bad rap because we might be a little bit higher in what we call “static current”, which is the draw when the device isn’t operating. However, when it comes to “run-time current”, when the device is operating, we’re actually much more efficient. That’s why we use the term efficiency. We’re much more efficient than many of our competitors, who brag about peak performance and the like but can’t get there on a regular basis. That’s a big advantage for us because efficiency translates into lower power.

I watched a conference where Andrew Ng, formerly of Google and Baidu, was speaking about building AI and whether it’s the quality of the data or the quality of the algorithm that matters. Which one would you favour, data or algorithm?

The reason I think our technology is valuable is that we play on both sides of that. One of my colleagues uses a concept he calls “data gravity”. Data has an area where it’s going to be located: is it going to be at the edge, or is it going to be centralised? In certain generations of technology it’s very difficult to move data. Like we talked about earlier, you might have to expand the capability of your vehicle network from CAN to Ethernet. Well, that’s going to cause a lot of change.

A lot of older vehicles want to stay on the edge, on the bumper, using CAN data. That tells you the gravity of that data is very high: it’s hard to move and it’s stuck right with the sensors. Over time the technology advances and, like I said, Ethernet technology will come in and we’ll be able to move that data.

On the AI side, you need to have flexible AI capabilities. Some of the best customers we have know how to use programmable logic in a very unique way. I have videos that I’ve shown many customers, and I’ll ask, “How many TOPS (Tera Operations Per Second) do you think this takes?” Many of our competitors will brag about having a very high TOPS number. I look at that and think: that’s pretty cool, everybody loves to have more capability, but with more capability usually comes more cost.

I’ve seen some of our AI customers use a device with less than one TOPS and do amazing things with it. People just don’t realise that this is the secret sauce of programmable logic: they can basically make a custom neural net that’s very efficient. Again, you’re not going to do that using the old techniques. Many of those older techniques were what we call traditional computer vision – edge detection, pattern matching and all those types of things – and that takes a lot more horsepower.
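Back-of-the-envelope arithmetic, using assumed figures for a small, well-quantized detection network, shows why a sub-one-TOPS device can be plenty:

```python
# All figures are illustrative assumptions, not customer data.
macs_per_inference = 500e6                   # a 0.5 GMAC pruned/quantized net
ops_per_inference = 2 * macs_per_inference   # one MAC = multiply + accumulate
frames_per_second = 30

required_tops = ops_per_inference * frames_per_second / 1e12
print(f"required compute: {required_tops:.3f} TOPS")  # 0.030 TOPS, well under 1
```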

This is where the CNN (Convolutional Neural Network) really opened everything up, and it doesn’t require as much processing power. So that’s where I think the secret sauce is in the AI, and the techniques are evolving rapidly. If you tried to use techniques from three or four years ago, you’re probably not going to get the results you want today. For example, our AI team shows a slide about dog classification algorithms. Six years ago, the best classification algorithms were about 60% accurate. Now they’re all the way up to 80% and more. It just shows you how the field keeps moving forward.
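For reference, a minimal CNN classifier of the kind being described might look like the PyTorch sketch below; the layer sizes and ten-class output are illustrative only:

```python
import torch
import torch.nn as nn

# A tiny, generic CNN: convolutions learn the features that traditional
# computer vision pipelines had to hand-engineer.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 10),   # e.g. ten object classes
)

logits = model(torch.randn(1, 3, 64, 64))  # one 64x64 RGB frame
print(logits.shape)  # torch.Size([1, 10])
```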

Are you having to convince people to adapt to LIDAR technology?

It’s not really about convincing them to adapt; it’s about convincing them to see the value of the technology. You know, it does cost money – everything does – but the cost numbers are already coming down quite a bit. We see our prices coming down because the volume is starting to ramp. As the volume goes up, the technology gets more and more attractive and the pricing can scale with it.

To that earlier point, the ability to do the AI portion is what unlocks the potential. Let’s say you have a UAV and you want to scan the ground to help you figure out where you can land – that could be a LIDAR technology. But if I can’t apply it for that purpose, then what are my options with the sensor? Can I do classification? Can I do detection? What can I see? How far can I see? All these things add value to the technology.

One of the big things LIDAR offers is very long range. Many car makers like it because they can see 200 metres, maybe even 400 or 500 metres out. That’s a huge advantage. Again, it depends on the intensity of the beam: if you increase the intensity you can see further, but it might raise safety issues. That’s the value of the technology.

When people see the potential of saving a life because the car can see further out, that’s important; it lets the system cover more and more cases. As I said, I think consumers have a low tolerance for false positives. When they spend $20,000 to $100,000 on a vehicle, they’re going to get pretty annoyed if the technology doesn’t work perfectly. That’s the expectation, and we all know technologies aren’t perfect – even our cell phones aren’t perfect. A lot of things go wrong with technology, but you want to minimise that aspect of it.

That’s one reason I think the adaptability and flexibility of Xilinx technology matters: it helps address those imperfections over time, because we can reprogram our device and its hardware. That gives the designer another dimension. If you’re limited to software only, there are only so many things you can fix. If I can fix the hardware, so to speak, I can just adapt the hardware. It’s another dimension in the designer’s toolbox for fixing problems.

Is there anything else you’d like to add?

I think the potential of LIDAR is huge. It’s a growing market and you can see the frothiness of it. There’s so much money being injected: you see the SPACs, you see the acquisitions, you see how many companies are still out there pioneering in this particular space. I think that speaks to the potential. Investors aren’t stupid people; they want a return, and something is going to deliver one. That alone is a testament to what’s to come and how much potential there is in that market.

This Q&A originally appeared in issue 2 of Automate Pro Europe magazine. All information was correct at the time of publication.

You can find more information about Xilinx on its website.

Stay up to date with the most recent automation, computer vision, machine vision and robotics news on Automate Pro Europe, CVPro, MVPro and RBPro.
