Smart cities: lidar sensing in intelligent transport systems

By Nitin Dahad


One aspect of the smart cities agenda is the deployment of intelligent transport systems. A pilot project at the San Francisco Municipal Transportation Agency (SFMTA) has demonstrated how lidar sensors can provide a solution for the city’s smart traffic signals pilot, part of San Francisco’s Vision Zero policy.

The objective of the city’s Vision Zero policy is to improve road safety: every year, roughly 30 people lose their lives and more than 200 are seriously injured while travelling on the streets of San Francisco. The objective of the smart traffic signals pilot within this larger framework was to explore the use of multimodal intelligent traffic signal systems (MMITSS), dedicated short range communication (DSRC), transit signal priority (TSP), and emergency vehicle preemption (EVP) technology to give priority to emergency and transit vehicles. In addition, the MMITSS should be able to detect pedestrians and bicycles in order to provide them leading intervals, scrambles, and/or protected phasing.

The first proof of concept, which ended in January 2020, deployed lidar sensors at five intersections and demonstrated the ability to profile traffic anonymously with 96% accuracy. The second proof of concept adds the data layer to the signal control network to enable the ‘intelligence’ in ITS; it is under way and expected to finish in early 2021.

We spoke with two people involved, who shed light on the technology, its deployment, and the results obtained; here we present highlights of those conversations. First, we spoke with Enzo Signore, chief marketing officer of lidar sensor provider Quanergy. Then we dug deeper into the proof of concept itself with Paul Hoekstra, the independent strategy execution consultant for the project at SFMTA.

Lidar: tracking IDs of objects anonymously

Enzo Signore of Quanergy explains the benefit of lidar technology in this type of application, which requires people and vehicle counting and flow management, particularly in light of the city’s ban on facial recognition.

The key value proposition for lidar technology in applications like stadiums and smart cities is the ability to anonymously track an object travelling across multiple sensor points. For example, a car will pass through many intersections, or a pedestrian will walk through many areas. What Quanergy can do is assign an ID to an object, and that ID stays with the object throughout its journey in the area being monitored.

Lidar is a time-of-flight sensing technology that pulses low-power, eye-safe lasers and measures the time it takes for each pulse to complete a round trip between the sensor and a target. The resulting aggregate data are used to generate a 3D point-cloud image, providing both spatial location and depth information to identify, classify, and track moving objects. (Image: Quanergy)
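To make the time-of-flight arithmetic concrete, here is a minimal sketch in Python; real sensors layer calibration, filtering, and per-beam corrections on top of this simple relationship.

C = 299_792_458.0  # speed of light, m/s

def tof_distance(round_trip_time_s: float) -> float:
    # Distance = (speed of light x round-trip time) / 2,
    # because the pulse travels to the target and back.
    return C * round_trip_time_s / 2.0

# A pulse returning after ~467 nanoseconds puts the target at ~70 m,
# the maximum range Signore quotes later in this article.
print(tof_distance(467e-9))  # ~70.0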

This is very complex to do, because when you go through multiple intersections, you need multiple sensors and multiple servers for the edge compute. Most technologies have a siloed view of only the area they are managing; when an object crosses the boundary between one area and the next, the ID is lost and a new ID is assigned. With that approach, you start losing track of the flow of people.

We have a technology called automated ID handover, which passes the ID of the person or vehicle from one area to the other. As long as we have field of view, the same ID stays with the object. This gives very good end-to-end visibility and tracking. It can be important for airports, for example from kerb to gate, where you could optimize the passenger experience, as well as for shopping centers and cities. A single ID for each individual enables end-to-end analytics.
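Quanergy has not published the handover algorithm itself, but the idea can be sketched as follows: when a track exits one sensor’s zone, match it to a track entering the adjacent zone by position and time, and carry the same ID across the boundary. All names and thresholds below are illustrative.

import math
from dataclasses import dataclass

@dataclass
class Track:
    track_id: int
    x: float          # position in a shared map frame, metres
    y: float
    timestamp: float  # seconds

def hand_over_id(exiting: Track, entering_candidates: list[Track],
                 max_gap_s: float = 1.0, max_dist_m: float = 3.0) -> None:
    # Give the exiting track's ID to the closest plausible new track,
    # so the object keeps one ID across the zone boundary.
    best, best_dist = None, max_dist_m
    for cand in entering_candidates:
        if abs(cand.timestamp - exiting.timestamp) > max_gap_s:
            continue  # too much time elapsed to be the same object
        dist = math.hypot(cand.x - exiting.x, cand.y - exiting.y)
        if dist < best_dist:
            best, best_dist = cand, dist
    if best is not None:
        best.track_id = exiting.track_id  # same ID continues in the new zone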

Quanergy’s M series sensors provide long-range detection; the MQ-8, for example, is designed specifically for flow-management applications. Here is how these sensors differ. Typical lidar sensors have a symmetric beam configuration: if you mount the sensor flat, then typically half the beams go to the sky and half go to the ground. If it is mounted 3 meters high on a street lamppost, looking down for the pedestrian view, then half the beams are wasted in this configuration.

In our design, all the beams point down, which gives symmetric coverage of the ground. That means there are no blind spots when a person is walking across the field of view, so we can track a person or vehicle without interruption anywhere in the field of view. We can see an object at a range of up to 70 m; a circle of that radius works out to roughly 15,000 sq. m of coverage (π × 70² ≈ 15,400 sq. m). This is a very large area that would otherwise need many cameras to cover, so lidar reduces both the number of sensors and the cost.

Overcoming privacy issues related to facial recognition

Paul Hoekstra, consultant for SFMTA, describes the thinking behind the implementation, the outcome of the first proof of concept (PoC) across five intersections on 3rd Street, and plans for expanding the coverage.

We started working with SFMTA, Cisco and Quanergy as the partners on this project in April 2019. Initially as part of the Cisco package we had DSRC sensors. We found we were using them just to listen to all the cars in the corridor and on the freeway that we were covering. We found that less than 1% of all cars actually broadcast that DSRC signal. From this use case perspective, the conclusion has to be that you cannot use DSRC for traffic flow measurement. It’s just not significant enough to make decisions on.

We have now completed the first PoC with Quanergy sensors and are in the middle of the second.

With the first PoC, we took 20 lidar sensors and installed them at five intersections on 3rd Street, near the new basketball stadium that opened last year. We had edge compute with Cisco TRX running Quanergy’s QORTEX software. Data from the lidar goes via the TRX box, and the QORTEX software publishes it to the network; from there it goes to the data center, a small VM cluster running the Cisco Kinetic platform, which stores all the messages: around 30 million a week.

Every Sunday, reports are published on that data. One covers vehicles, identifying each by its lidar ID at the intersection, along with a whole set of attributes: the time, the day of the week, where it came from, where it went, how often it stopped, how long each stop lasted, what its speed was, and whether there was an event (from the event calendar). In this way, we could connect all the intersections and follow vehicles through the corridor. Then we could say things like, “this is the number that entered northbound at the south side of the corridor, and this is how many turned off,” and so on.
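As an illustration, a single vehicle record in such a weekly report might look like the following. The field names and values are hypothetical, assembled only from the attributes Hoekstra lists; the pilot’s actual schema is not public.

# One hypothetical vehicle record, built from the attributes named above.
vehicle_record = {
    "lidar_id": 48213,                  # anonymous ID assigned by the sensor
    "intersection": "3rd_St_node_04",   # placeholder node name
    "time": "2020-02-02T14:31:05",
    "day_of_week": "Sunday",
    "came_from": "northbound approach",
    "went_to": "westbound exit",
    "stop_count": 1,
    "stop_duration_s": 24.5,
    "speed_mph": 22.0,
    "event": "stadium game",            # from the event calendar, if any
}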

With Quanergy’s QORTEX, we calibrated the system and achieved 96% accuracy. You can’t just count IDs with the lidar; you have to build logic to ensure that the same ID appears at the egress as at the ingress. With that logic we can follow cars through the intersection. We also have logic that defines what a stop is. That 96% accuracy is where we ended up; it is very precise. For pedestrians, with the zones we defined, you can see whether a person is on or off the kerb, whether they are inside or outside the boundaries of a crosswalk, and how close a car came to them. With that kind of data, we can create reports of near misses. We’ve defined what a near miss is: take the vectors and the velocities, calculate the time for the two to collide, and if it falls within a certain range, call it a near miss.
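The near-miss calculation Hoekstra describes can be sketched as follows: from two tracked objects’ positions and velocity vectors, estimate their time of closest approach and the separation at that moment. The thresholds here are illustrative, not the pilot’s calibrated values.

import math

def near_miss(px, py, pvx, pvy,      # pedestrian position (m) and velocity (m/s)
              cx, cy, cvx, cvy,      # car position and velocity
              max_ttc_s: float = 3.0, min_sep_m: float = 1.5) -> bool:
    # Relative position and velocity of the car with respect to the pedestrian.
    rx, ry = cx - px, cy - py
    vx, vy = cvx - pvx, cvy - pvy
    v2 = vx * vx + vy * vy
    if v2 == 0.0:
        return False                      # no relative motion, no closing
    t_star = -(rx * vx + ry * vy) / v2    # time of closest approach
    if t_star <= 0.0 or t_star > max_ttc_s:
        return False                      # already diverging, or too far off
    sep = math.hypot(rx + vx * t_star, ry + vy * t_star)
    return sep < min_sep_m                # flag if they pass too close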

This is only possible because the data coming from QORTEX is so precise that we can see all this without ever having to identify anyone. We are not storing any personally identifiable information. A person is just a dot, and a car is just a block; you don’t know what kind of car it is. We classify based on size.
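Size-based classification of this kind can be as simple as thresholding the dimensions of an anonymous bounding box. The thresholds below are invented for illustration, not the pilot’s calibrated values.

def classify_by_size(length_m: float, width_m: float, height_m: float) -> str:
    # Infer a class from bounding-box dimensions alone; no identity involved.
    if length_m < 1.0 and width_m < 1.0:
        return "pedestrian"
    if length_m < 2.5 and height_m < 2.0:
        return "bicycle"
    if length_m < 7.0:
        return "car"
    return "bus_or_truck"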

In the second proof of concept, 10 sensors cover a larger transport corridor in San Francisco, along 3rd Street between Channel and 20th Street (Image: Paul Hoekstra)

The first PoC was all about analysis. In the second one we are expanding the number of intersections. So now we have five, and we are going to go to 10. In other words, a larger corridor.

We’ll then layer in all the data that is currently locked inside the cabinets. In each cabinet there is a signal controller, and on the controller there are many actuators. These could be an inductive loop detecting a vehicle, a pedestrian pushbutton, or sensors in the light-rail track. There is also traffic signal prioritization. All of this sits in the embedded signal controller.

So what we are doing now is enabling two-way information exchange with the signal controllers, taking all the data from the intersections, such as the lidar data and object classification (again, completely anonymous) at the platforms and the bus stops. The object classification from those sensors, which is all processed on the sensors themselves, gives us the number of people as well as their classification: for example, is somebody in a wheelchair, is someone pushing a stroller, or do they have a bike? Many of these factors determine the dwell time of the transit vehicle. We want to know the predicted dwell time based on how many people are there.
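A rough sketch of such a dwell-time prediction, assuming a fixed boarding time per passenger class; the numbers are placeholders, not values from the pilot, which would instead be fitted to observed dwell times.

BOARDING_TIME_S = {
    "pedestrian": 2.0,   # assumed seconds of boarding time per person
    "bike": 6.0,
    "stroller": 8.0,
    "wheelchair": 30.0,  # ramp deployment dominates
}

def predicted_dwell_time(platform_counts: dict[str, int],
                         base_s: float = 10.0) -> float:
    # Door-open baseline plus assumed boarding time for everyone waiting.
    return base_s + sum(BOARDING_TIME_S[cls] * n
                        for cls, n in platform_counts.items())

# e.g. 8 pedestrians, 1 wheelchair, 2 bikes waiting on the platform:
print(predicted_dwell_time({"pedestrian": 8, "wheelchair": 1, "bike": 2}))
# -> 10 + 16 + 30 + 12 = 68.0 seconds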

From the back-end system we’re going to grab the number of vehicles. Then, with the analytics, we can determine whether we need, say, 20 seconds of dwell time or 32 seconds. We can then extrapolate across all 10 intersections.

Treating the whole transport corridor as a network
We can’t optimize the whole corridor without knowing precisely where people and vehicles are, and for how long. This means we treat the whole corridor as a network, not as individual nodes, and run the algorithms at high frequency: we recalculate everything every second, and we are now debating whether we need to go faster than 1 hertz.

Then we actually tell the signal controller: you shall go green on northbound. That closes the loop. From Cisco’s supply chain work, we know that is the only way to move things through the intersection. All the technology is available, but today it only optimizes individual siloes. In this way, we’re taking a huge leap forward with a new paradigm of integrated traffic management.
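Conceptually, the closed loop might look like the following sketch: once per second, gather the corridor-wide state, optimize phases across all intersections at once, and push the decisions back to the controllers. Every name and the decision logic here are hypothetical; the sketch only illustrates treating ten intersections as one network rather than as siloes.

import time

INTERSECTIONS = [f"intersection_{i:02d}" for i in range(1, 11)]  # 10 nodes

def control_loop(get_corridor_state, decide_phases, send_command,
                 hz: float = 1.0) -> None:
    # Sense, decide, actuate, then wait out the rest of the cycle.
    # (A real controller would subtract compute time from the sleep
    # and handle sensor or network failures.)
    period = 1.0 / hz
    while True:
        state = get_corridor_state(INTERSECTIONS)  # lidar tracks + cabinet data
        phases = decide_phases(state)              # optimize across all nodes at once
        for node, phase in phases.items():
            send_command(node, phase)              # e.g. "go green northbound"
        time.sleep(period)                         # recalculate every second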

There are already 7,000 cameras in San Francisco, but cameras only give you a 2D picture, and the location precision is lower than what you can achieve with lidar. Lidar always works, in rain and at night, and it stays far away from the privacy issues. The moment people know they are being tracked, or that they can be recognized, you run into the issue of people not trusting the government to protect them.

The goals of the project are to give emergency vehicles priority when responding to emergencies, to optimize transit timings and stops, and even to platoon cars when no public transit is available, moving them more effectively through the corridor.
