AV sensing and safe navigation begin with crisp, high-resolution imagery that can "see" to long distances, day or night, in clear or adverse weather. The elephant in the room is that imagery produced by lidar and cameras is severely degraded in adverse weather.
In contrast, RFNav's Kijughz radar sensor produces sharp, high-definition, fast-frame-rate 4-D voxels (x, y, z, and Doppler) and 3-D imagery in all weather conditions. Our images have lidar-comparable spatial resolution, and our low-cost sensor enjoys a 3000:1 wavelength advantage over lidar for weather penetration.
RFNav's tech is more than the imaging sensor itself. It includes a suite of intelligent algorithms that solve hard problems such as multi-sensor fusion, image artifact removal, image de-blurring, automatic calibration, dense scene tracking, object identification, RF domain mapping, and much more.
Today's AV radar systems are low resolution (fat beams) with limited imaging capability. Traditional radar imaging requires platform motion (Doppler) and therefore takes time; the resulting images are slow to form and coarse.
In contrast, RFNav's Kijughz radar sensor supports stationary real-beam imaging as well as non-stationary Doppler beam sharpening (DBS), SAR, and ISAR image products. In simpler terms, high-definition, fast-frame images, honest in both angles, are generated for AVs both at rest and in motion.
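To see why real-beam imaging is available instantly while motion-based sharpening takes time, the textbook first-order relations can be sketched. The aperture size, vehicle speed, and dwell time below are illustrative assumptions, not Kijughz specifications:

```python
import math

C = 3.0e8                 # speed of light, m/s
F = 77.0e9                # carrier frequency, Hz
WAVELENGTH = C / F        # ~3.9 mm at 77 GHz

def real_beam_cross_range(aperture_m, range_m):
    """Cross-range cell of a stationary real-beam image:
    diffraction-limited beamwidth (~lambda/D) projected to range."""
    return (WAVELENGTH / aperture_m) * range_m

def dbs_cross_range(speed_mps, dwell_s, range_m):
    """Textbook Doppler-beam-sharpening / stripmap-SAR cross-range
    resolution: lambda * R / (2 * synthetic aperture length)."""
    return WAVELENGTH * range_m / (2.0 * speed_mps * dwell_s)

# A 0.9 m strip at 100 m gives roughly 0.43 m real-beam cells with no
# motion required; 15 m/s of motion integrated over 100 ms sharpens the
# cells to roughly 0.13 m, but only after the dwell has elapsed.
```

The second relation makes the trade explicit: finer Doppler-sharpened cells require a longer synthetic aperture, i.e., more motion and more time.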
The RFNav Kijughz radar sensor performs imaging and scene interpretation in fog, dust, smoke, smog, haze, rain, and snow, where lidars and cameras fail.
The sensor has a Boolean aperture architecture composed of two external components: a thin, low-profile, horizontally oriented strip, and a similar but shorter, vertically slanted strip. The Boolean aperture mounting locations have some flexibility; one example is illustrated in Figure 1 below.
The RFNav Kijughz radar sensor is scalable. One model has the following lidar-comparable spatial/range resolution and frame rate at 77 GHz:
| Parameter | | Specification |
|---|---|---|
| Azimuth beamwidth | < | 0.24 degs |
| Elevation beamwidth | < | 1.1 degs |
| Range resolution | < | 0.2 meters |
| Frame rate | > | 2 kHz |
| Azimuth coverage | | ± 70 degs |
| Elevation coverage | | ± 55 degs |
| Range limit | | 250 meters |
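The beamwidths above are consistent with first-order diffraction sizing, which also shows why lidar-comparable resolution at 77 GHz implies apertures on the order of a meter. The check below uses the standard approximation theta ≈ lambda/D; the exact constant depends on the aperture taper:

```python
import math

C = 3.0e8
F = 77.0e9
WAVELENGTH = C / F        # ~3.9 mm at 77 GHz

def aperture_for_beamwidth(beamwidth_deg):
    """Aperture extent implied by a diffraction-limited beamwidth,
    using the first-order relation theta ~ lambda / D (in radians)."""
    return WAVELENGTH / math.radians(beamwidth_deg)

def bandwidth_for_range_res(res_m):
    """Waveform bandwidth implied by a range resolution: c / (2 * dR)."""
    return C / (2.0 * res_m)

# 0.24 deg azimuth  -> ~0.93 m of horizontal aperture extent
# 1.1 deg elevation -> ~0.20 m of vertical aperture extent
# 0.2 m range cells -> 750 MHz of waveform bandwidth
```

The same arithmetic reappears later in the discussion of metamaterial apertures, whose single contiguous panel must provide the full extent in both dimensions at once.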
A simulated Kijughz radar single fast-time frame image product for a stationary car at 42 feet from a stationary AV is shown in Figure 2. Notice that the image conveys high-precision localization of the vehicle's contour, edges, and size, including such details as the passenger mirror, bumper, trunk, wheel wells, and rear window trims.
Some MIMO 2-D array designs have sparse T/R allocations compared to conventional 2-D electronically steered arrays. Unfortunately, large MIMO arrays with lidar-comparable resolution (Figure 3) are expensive and consume significant cross-sectional area. They also face additional difficulties, including maintaining calibration in the presence of array vibration.
In contrast, the RFNav Boolean aperture architecture yields a lower-cost, more flexible design (Figure 1) compared to a cost-optimized 2-D MIMO array. The Boolean aperture has fewer T/R modules and a more robust reference signal distribution. The result is a lower system and installation cost, with simpler auto-calibration for internal motion compensation.
Some metamaterial antennas perform phase steering by changing the dielectric properties of each cell in a large array of cells. Analog beamforming is realized with a feed network and appropriate beam steering. These arrays typically have a single beam port and a single T/R module, resulting in low cost/power. Disadvantages include slow imaging frame rates, difficulty with interference or large signal-return cancellation, difficulty with real-time calibration, and difficulty with single-pulse short-to-long-range beam compensation. Further, to obtain lidar-comparable resolution, the metamaterial aperture occupies significant cross-sectional area, >0.9 meters wide by >0.2 meters tall, which makes it a challenge to locate on AVs (Figure 4).
Luneburg lenses are analog beamforming apertures. The classical Luneburg lens is a sphere whose dielectric constant decreases with increasing radius. The result is that multiple simultaneous beams can be formed (Figure 5) at relatively low cost/power compared to a classical electronically steered phased array.
In contrast to both Luneburg and Rotman lenses, the RFNav Boolean aperture architecture supports adaptive interference and clutter cancellation (aka STAP), real-time calibration, and single-pulse variable-range beam compensation. The aperture's lower cross-section and independent Boolean architecture enable more flexible mounting options and a lower system and installation cost, with lidar-comparable angular resolution (Figure 1).
AVs have severe multi-mode sensor fusion challenges. As an example, cameras, lidar, radar, and ultrasound each have different propagation and scattering characteristics, voxel dimensions, sensor errors, and calibration drift. Some sensors may be incomplete (e.g., 2-D) while others may be 3-D. Further, as a group, the collection of sensors typically does not share a common phase center or baseline.
Sensor fusion can operate in many domains including pre-detection, post-detection, pre-track, and post-track. The fusion process may also work at different feature extraction levels from primitive (wavelets) to higher level features such as geometric shape bases to abstract levels including object identification.
One of RFNav’s sensor fusion algorithms operates at the early pre- and post-detection processing stages, close to the sensor hardware. The algorithm develops fused N-D voxels from multiple incomplete (2-D) sensors and/or complete (3-D + Doppler) sensors with different physical locations. The remarkable aspect of RFNav's technology is the elimination of almost all of the fusion errors, aka fusion "ghosts."
A simulation example showing fusion of two incomplete (2-D) sensors that do not share a common baseline is shown in Figure 8. In this example each sensor is a 2-D radar, each with different spatial resolutions or voxel dimensions. Each sensor gets one fast-time single-pulse look at a set of single point scatterers arranged in the shape of the letters. Both pre- and post-detection sensor data are presented to the fusion algorithm.
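To see why naive fusion of incomplete sensors produces ghosts in the first place, consider a toy case (this is the classic ambiguity problem, not RFNav's algorithm) in which one sensor observes only the x coordinate of each scatterer and another observes only y:

```python
from itertools import product

# True 2-D scatterer positions (illustrative values).
targets = [(1.0, 2.0), (3.0, 5.0), (4.0, 1.0)]

# Each incomplete sensor observes only one coordinate.
meas_x = sorted({x for x, _ in targets})
meas_y = sorted({y for _, y in targets})

# Naive fusion: every (x, y) pairing of measurements is a candidate voxel.
candidates = set(product(meas_x, meas_y))
ghosts = candidates - set(targets)

# 3 real scatterers yield 9 candidates, 6 of which are ghosts;
# in general N targets produce N*(N-1) ghost pairings.
print(len(candidates), len(ghosts))  # 9 6
```

The quadratic growth of ghost candidates with scene density is what makes dense-scene fusion hard, and why suppressing nearly all ghosts is a meaningful claim.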
AVs operate in dense scenes with many targets spanning a wide radar dynamic range. For example, a small child standing next to a parked car may appear as a weak target, masked or obscured by the processing sidelobes of the large signal from an adjacent car door. The sidelobes cause image "blurring": functionally, a loss of resolution and an increase in image entropy that creates false signals.
RFNav has developed new algorithms, a mixture of signal processing and machine learning, that reduce the spectral-domain "blurring" that occurs in conventional radar processing.
To illustrate the crispifying capability of the RFNav machine, the mmWave signal returns from a set of single point scatterers, arranged in the shape of the "RFNav" letters in height and range, were simulated. Data was formed for a single fast-time look. The scatterers have a distribution of radar cross section spanning 80 dB of dynamic range, so very strong signals can appear side by side with very weak ones. A Hann-weighted power spectrum of the signal set is shown on the left side of Figure 9. The middle shows a conventional method to reduce sidelobes, resulting in an average blur of 1.66 resolution bins. On the right is the image after processing by RFNav's algorithm, which reduces the average blur to 0.99 resolution bins.
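The sidelobe-versus-blur trade behind the conventional middle panel can be reproduced with standard tools. The sketch below (tone placement, pad factor, and window choice are illustrative; this is conventional windowed processing, not RFNav's algorithm) measures both the sidelobe suppression a Hann taper buys and the mainlobe broadening it costs:

```python
import numpy as np

N, PAD = 256, 16                            # samples, zero-pad factor
n = np.arange(N)
tone = np.exp(2j * np.pi * 40.25 * n / N)   # off-bin tone -> spectral leakage

def spectrum_db(x):
    """Finely sampled magnitude spectrum, normalized to 0 dB peak."""
    s = np.abs(np.fft.fft(x, N * PAD))
    return 20.0 * np.log10(s / s.max())

def peak_sidelobe_db(spec, halfwidth_bins):
    """Largest spectral value outside the mainlobe region."""
    k = int(np.argmax(spec))
    guard = int(halfwidth_bins * PAD)
    mask = np.ones(spec.size, dtype=bool)
    mask[max(k - guard, 0):k + guard + 1] = False
    return float(spec[mask].max())

def width_3db_bins(spec):
    """Mainlobe width at -3 dB, in resolution bins (the "blur")."""
    return np.count_nonzero(spec > -3.0) / PAD

rect = spectrum_db(tone)                    # no taper
hann = spectrum_db(tone * np.hanning(N))    # Hann taper

# The Hann taper suppresses peak sidelobes by roughly 18 dB (~-31 vs
# ~-13 dB) at the price of a mainlobe roughly 1.6x wider -- exactly the
# broadening that masks a weak target next to a strong one.
rect_psl = peak_sidelobe_db(rect, 1.0)      # rect mainlobe nulls at +/-1 bin
hann_psl = peak_sidelobe_db(hann, 2.0)      # Hann mainlobe nulls at +/-2 bins
```

With an 80 dB scene dynamic range, even −31 dB sidelobes from a strong scatterer sit far above a weak neighbor, which is why reducing blur without raising sidelobes requires something beyond window selection.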