FAQs on RFNav Sensors, Tech, etc.

AV sensing and safe navigation begin with crisp, high-resolution imagery that can "see" long distances, day or night, in clear or adverse weather. The elephant in the room is that the imagery produced by lidar and cameras is severely degraded in adverse weather.

In contrast, RFNav's Kijughz radar sensor produces sharp, high-definition, fast-frame-rate 4-D voxels (x, y, z, and Doppler) and 3-D imagery in all weather conditions. Our images have lidar-comparable spatial resolution, and our low-cost sensor enjoys a 3000:1 wavelength advantage over lidar for weather penetration.
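
As a rough check on the wavelength claim, the sketch below compares the 77 GHz free-space wavelength with a few common automotive lidar wavelengths; the specific lidar wavelengths are assumptions for illustration, not values from RFNav.

```python
# Rough check of the radar-vs-lidar wavelength ratio behind the
# "3000:1 weather penetration advantage" claim. The lidar wavelengths
# below are common automotive values, assumed here for illustration.
C = 299_792_458.0                  # speed of light, m/s

radar_wavelength = C / 77e9        # 77 GHz carrier -> ~3.9 mm

for lidar_nm in (905, 1310, 1550):
    ratio = radar_wavelength / (lidar_nm * 1e-9)
    print(f"lidar {lidar_nm:>4} nm -> wavelength ratio {ratio:,.0f} : 1")
# The ~3000:1 figure corresponds to a lidar operating near 1.3 um.
```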

RFNav's tech is more than the imaging sensor itself. It includes a suite of intelligent algorithms that solve hard problems such as multi-sensor fusion, image artifact removal, image de-blurring, automatic calibration, dense scene tracking, object identification, RF domain mapping, and much more.

Today's AV radar systems are low resolution (fat beams) with limited imaging capability. Traditional imaging that relies on movement (Doppler) takes time; the resulting radar images are slow to form and coarse.

In contrast, RFNav's Kijughz radar sensor supports stationary real-beam imaging as well as non-stationary Doppler Beam Sharpening/SAR/ISAR image products. In simpler terms, high-definition, fast-frame images, with true resolution in both azimuth and elevation, are generated for AVs both at rest and in motion.
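
To put numbers on "fat beams" versus Doppler-based sharpening: a real-beam cross-range cell is roughly range times beamwidth, while Doppler beam sharpening shrinks the cell at the cost of platform motion and integration time. The sketch below uses illustrative speeds, dwell times, and beamwidths, not RFNav design values.

```python
import numpy as np

# Cross-range resolution: real beam vs. Doppler beam sharpening (DBS).
# All numbers are illustrative, not RFNav design values.
R = 100.0                        # target range, m

# Real beam: cross-range cell ~ R * beamwidth (radians).
fat_beam_deg   = 10.0            # typical low-resolution AV radar beam
sharp_beam_deg = 0.24            # lidar-comparable beam (see the spec list below)
for bw in (fat_beam_deg, sharp_beam_deg):
    cell = R * np.radians(bw)
    print(f"real beam {bw:>5.2f} deg -> {cell:6.2f} m cross-range cell at {R:.0f} m")

# DBS: finer cells, but only while the platform moves and integrates.
wavelength = 3e8 / 77e9          # ~3.9 mm at 77 GHz
v = 15.0                         # platform speed, m/s (assumed)
T = 0.05                         # coherent integration time, s (assumed)
# Broadside DBS cross-range resolution ~ lambda * R / (2 * v * T)
dbs_cell = wavelength * R / (2 * v * T)
print(f"DBS ({T*1e3:.0f} ms dwell at {v:.0f} m/s) -> {dbs_cell:.2f} m cell")
```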

RFNav's Kijughz radar sensor performs imaging and scene interpretation in fog, dust, smoke, smog, haze, rain, and snow, where lidars and cameras fail.

The sensor has a Boolean aperture architecture composed of two external components: a thin, low-profile, horizontally oriented strip and a similar, but shorter, vertically slanted strip. The Boolean aperture mounting locations have some flexibility; one example is illustrated in Figure 1 below.

Figure 1. Kijughz radar sensor on an AV with lidar-comparable resolution.

RFNav's Kijughz radar sensor is scalable. One model has the following lidar-comparable spatial/range resolution, frame rate, and coverage at 77 GHz (a beamwidth-to-aperture sanity check follows the list):

Azimuth beamwidth: < 0.24 deg
Elevation beamwidth: < 1.1 deg
Range resolution: < 0.2 m
Frame rate: > 2 kHz

Azimuth coverage: ±70 deg
Elevation coverage: ±55 deg
Range limit: 250 m
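
As a sanity check, those beamwidths pin down the aperture extent through the usual λ/D diffraction relation. The sketch below is a back-of-envelope estimate, not an RFNav design calculation; it reproduces the >0.9 m by >0.2 m figures quoted for the alternative apertures discussed later.

```python
import numpy as np

# Diffraction-limit sanity check: an azimuth beamwidth near 0.24 deg at
# 77 GHz implies an aperture roughly lambda/theta wide. This is the same
# back-of-envelope that yields the ">0.9 m" figures quoted for the MIMO,
# metamaterial, and Luneburg alternatives below.
wavelength = 3e8 / 77e9                  # ~3.9 mm

def aperture_for_beamwidth(bw_deg: float) -> float:
    """Approximate aperture extent (m) for a given beamwidth (deg)."""
    return wavelength / np.radians(bw_deg)

print(f"azimuth   0.24 deg -> ~{aperture_for_beamwidth(0.24):.2f} m wide")
print(f"elevation 1.10 deg -> ~{aperture_for_beamwidth(1.10):.2f} m tall")
```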

A simulated Kijughz radar single fast-time frame image of a stationary car 42 feet from a stationary AV is shown in Figure 2. Notice that the image conveys high-precision localization of the vehicle's contour, edges, and size, including such details as the passenger mirror, bumper, trunk, wheel wells, and rear window trim.

Figure 2. Single-frame, fast-time, real-beam image example of a stationary car.

In Figure 2, the Kijughz radar's 4-D voxels (3-D plus Doppler) are projected down to the 2-D plane. The car was modeled as a sparse collection of single-point scatterers. Simplifying assumptions included no multipath, no multi-bounce, and no vehicle-interior scatterers.
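
For a flavor of that kind of simulation, here is a minimal single-look, point-scatterer range-profile sketch. The bandwidth, scatterer placement, and amplitudes are placeholders chosen for a clean readout, not RFNav's actual car model.

```python
import numpy as np

# Minimal point-scatterer range-profile simulation in the spirit of the
# Figure 2 setup: sparse single-point scatterers, one fast-time look,
# no multipath or multi-bounce. All parameters are placeholders.
c = 3e8
B = 1e9                                   # sweep bandwidth, Hz (assumed)
range_res = c / (2 * B)                   # 0.15 m

n_bins = 512
ranges = np.arange(n_bins) * range_res    # fast-time range axis

# A few scatterers standing in for bumper / mirror / wheel-well returns
# on a car ~42 ft (12.8 m) away; placed on bin centers for a clean readout.
scatterers = [(12.75, 1.0), (13.05, 0.4), (13.95, 0.7)]   # (range m, amplitude)

# The ideal matched-filter response of each scatterer is a shifted sinc.
profile = np.zeros(n_bins, dtype=complex)
for r, a in scatterers:
    profile += a * np.sinc((ranges - r) / range_res)

peaks = np.argsort(np.abs(profile))[-3:]
for idx in sorted(peaks):
    print(f"peak at {ranges[idx]:5.2f} m, |amp| = {abs(profile[idx]):.2f}")
```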

Some 2-D MIMO array designs have sparse T/R allocations compared to conventional 2-D electronically steered arrays. Unfortunately, large MIMO arrays with lidar-comparable resolution (Figure 3) are expensive and consume significant cross-sectional area. They also have additional difficulties, including maintaining calibration in the presence of array vibration.
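
To see why such arrays get expensive, the sketch below counts channels for a filled half-wavelength 2-D virtual aperture at the size implied by the beamwidths above. The spacing and filled-aperture assumptions are illustrative worst cases, not a statement about any particular product.

```python
import numpy as np

# Rough channel count for a filled half-wavelength 2-D virtual array at
# the aperture size from the beamwidth check above. Illustrative only.
wavelength = 3e8 / 77e9
d = wavelength / 2                        # element pitch (assumed)

width, height = 0.93, 0.20                # m, from the lambda/theta estimate
n_az = int(width / d)                     # virtual columns
n_el = int(height / d)                    # virtual rows
n_virtual = n_az * n_el

# MIMO synthesizes n_virtual = n_tx * n_rx; a balanced split minimizes
# physical channels at roughly sqrt(n_virtual) each.
n_phys = 2 * int(np.ceil(np.sqrt(n_virtual)))
print(f"virtual elements : {n_virtual:,} ({n_az} x {n_el})")
print(f"physical channels: >= ~{n_phys} (balanced Tx/Rx split)")
```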

In contrast, the RFNav Boolean aperture architecture yields a lower-cost, more flexible design (Figure 1) than a cost-optimized 2-D MIMO array. The Boolean aperture has fewer T/R modules and a more robust reference-signal distribution. The result is lower system and installation cost, with simpler auto-calibration for internal motion compensation.

Figure 3. Approximate size of a MIMO aperture with lidar-comparable resolution.

Some metamaterial antennas perform phase steering by changing the dielectric properties of each cell in a large array of cells. Analog beamforming is realized with a feed network and appropriate beam steering. These arrays typically have a single beam port and a single T/R module, resulting in low cost and power. Disadvantages include slow imaging frame rates, difficulty with interference or large-signal return cancellation, difficulty with real-time calibration, and difficulty with single-pulse short-to-long-range beam compensation. Further, to obtain lidar-comparable resolution, the metamaterial aperture occupies significant cross-sectional area, >0.9 meters wide by >0.2 meters tall, which is a challenge to locate on AVs (Figure 4).
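
The per-cell steering described here follows the standard phased-array relation: pointing the beam at angle θ requires a progressive phase of 2πd·sin(θ)/λ from cell to cell. A minimal sketch with assumed pitch and steer angle:

```python
import numpy as np

# Progressive per-cell phase for analog beam steering, as in the
# metamaterial arrays described above. Cell pitch and steer angle are
# illustrative values, not any vendor's design.
wavelength = 3e8 / 77e9
pitch = wavelength / 2                    # cell spacing (assumed)
steer_deg = 20.0                          # desired beam direction (assumed)

n_cells = 8                               # show only the first few cells
phase_step = 2 * np.pi * pitch * np.sin(np.radians(steer_deg)) / wavelength
phases = (np.arange(n_cells) * phase_step) % (2 * np.pi)
for i, p in enumerate(phases):
    print(f"cell {i}: phase {np.degrees(p):6.1f} deg")
# Each cell's dielectric state approximates its required phase, and the
# whole array must be re-tuned per beam, one source of the slow frame
# rates noted above.
```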

Figure 4. Approximate size of a metamaterial aperture with lidar-comparable resolution.

In contrast, the RFNav Boolean aperture architecture supports fast frame rates, adaptive interference and clutter cancellation (STAP), real-time calibration, and single-pulse variable-range beam compensation, with minimal cross-section, resulting in a more discreet installation and a lower overall system cost (Figure 1).

Luneburg lenses are analog beamforming apertures. The classical Luneburg lens is a sphere with a dielectric constant that decreases with increasing radius. The result is that multiple simultaneous beams can be formed (Figure 5) at relatively low cost and power compared to a classical electronically steered phased array.
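
For reference, the classical Luneburg profile is eps_r(r) = 2 - (r/R)^2, so the refractive index falls from sqrt(2) at the center to 1 at the rim; a quick sketch with an assumed lens radius:

```python
import numpy as np

# Classical Luneburg lens profile: eps_r(r) = 2 - (r/R)^2, so the
# refractive index falls from sqrt(2) at the center to 1 at the rim,
# focusing any incident plane wave to a point on the opposite surface.
R = 0.45                                   # lens radius, m (illustrative)
for frac in (0.0, 0.5, 1.0):
    r = frac * R
    eps_r = 2.0 - (r / R) ** 2
    print(f"r = {r:4.2f} m: eps_r = {eps_r:.2f}, n = {np.sqrt(eps_r):.3f}")
```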

Figure 5. Illustration of a plane wave focused to one beam port by a classical Luneburg lens.

Other Luneburg topologies include more volume-efficient cylindrical and disk shapes with non-radially-symmetric dielectric distributions, but the beamwidth is still limited by the aperture's physical cross-section. Other challenges with Luneburg lenses include difficulty with interference or large nuisance-signal cancellation, difficulty with real-time calibration, and difficulty with single-pulse short-to-long-range beam compensation. Like the metamaterial and MIMO antennas, the aperture of a classical Luneburg lens required to obtain lidar-comparable azimuthal angular resolution has a large cross-sectional area, >0.9 meters in diameter (Figure 6).

Figure 6. Approximate size of a classical Luneburg lens with lidar-comparable resolution.

The Rotman lens aperture is another analog beamforming device supporting multiple simultaneous beams (Figure 7).

Figure 7. Illustration of Rotman lens topology and plane wave focus to one beam port.

Some Rotman lens challenges are similar to those of the Luneburg lens, including difficulty with large interference or nuisance-signal return cancellation, difficulty with real-time calibration, difficulty with single-pulse short-to-long-range beam compensation, relatively high insertion loss, and the need for a large aperture, 0.9 meters by 0.2 meters, for lidar-comparable angular resolution.

In contrast to both Luneburg and Rotman lenses, the RFNav Boolean aperture architecture supports adaptive interference and clutter cancellation (aka STAP), real-time calibration, and single-pulse variable-range beam compensation. The aperture's lower cross-section and independent Boolean architecture enable more flexible mounting options with lower system and installation cost at lidar-comparable angular resolution (Figure 1).

AVs have severe multi-mode sensor fusion challenges. For example, cameras, lidar, radar, and ultrasound each have different propagation and scattering characteristics, voxel dimensions, sensor errors, and calibration drift. Some sensors may be incomplete (e.g., 2-D) while others may be 3-D. Further, as a group, the collection of sensors typically does not share a common phase center or baseline.

Sensor fusion can operate in many domains, including pre-detection, post-detection, pre-track, and post-track. The fusion process may also work at different feature extraction levels, from primitives (wavelets), to higher-level features such as geometric shape bases, to abstract levels including object identification.

One of RFNav's sensor fusion algorithms operates at the early pre- and post-detection processing stages, close to the sensor hardware. The algorithm develops fused N-D voxels from multiple incomplete (2-D) sensors and/or complete (3-D plus Doppler) sensors at different physical locations. The remarkable aspect of RFNav's technology is the elimination of almost all fusion errors, aka fusion "ghosts."
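
The ghost problem itself is easy to reproduce with a toy example: if each incomplete sensor resolves only one axis, naive fusion intersects every reading from one sensor with every reading from the other, turning N true targets into N² candidates. The sketch below shows only that naive baseline; RFNav's algorithm, which suppresses these ghosts, is not reproduced here.

```python
from itertools import product

# Toy illustration of fusion ghosts: sensor A resolves only x, sensor B
# only y (each an "incomplete" measurement). Naive fusion forms every
# (x, y) intersection, so N true targets yield N^2 candidates.
# This is NOT RFNav's algorithm; it is the failure mode that algorithm avoids.
targets = [(1.0, 4.0), (2.0, 2.0), (3.0, 5.0)]     # true (x, y) positions

xs = sorted(x for x, _ in targets)                 # what sensor A reports
ys = sorted(y for _, y in targets)                 # what sensor B reports

candidates = set(product(xs, ys))                  # naive fusion
ghosts = candidates - set(targets)
print(f"true targets : {len(targets)}")
print(f"candidates   : {len(candidates)}")
print(f"ghosts       : {len(ghosts)}")             # 9 - 3 = 6 ghosts
```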

A simulation example showing fusion of two incomplete (2-D) sensors that do not share a common baseline is shown in Figure 8. In this example, each sensor is a 2-D radar with its own spatial resolution and voxel dimensions. Each sensor gets one fast-time, single-pulse look at a set of single-point scatterers arranged in the shape of the letters. Both pre- and post-detection sensor data are presented to the fusion algorithm.

Figure 8. Example of low-level 3-D voxel fusion for two 2-D sensors.
Left, traditional method corrupted with ghosts. Right, RFNav sensor fusion.

On the left is an image produced by a traditional fusion method. The dark dots that are not co-located on the letters are fusion ghosts. Downstream multi-look tracking and subsequent AI are compromised by these ghosts. On the right is the RFNav fusion result: almost all ghosts are eliminated, and the handful remaining fall outside the constant down-range cell where the letter targets are located.

AVs operate in dense scenes with many targets spanning a wide radar dynamic range. For example, a small child standing next to a parked car may appear as a weak target, masked or obscured by the processing sidelobes of the large signal from an adjacent car door. The sidelobes result in image "blurring," functionally a loss of resolution and an increase in image entropy, creating false signals.

RFNav has developed new algorithms, a mixture of signal processing and machine learning, that reduce the spectral-domain "blurring" that occurs in conventional radar processing.

To illustrate this de-blurring capability, we simulated the mmWave signal returns from a set of single-point scatterers arranged to form the letters "RFNav" in height and range. Data were formed for a single fast-time look. The scatterers have radar cross sections spanning an 80 dB dynamic range, so very strong signals can appear side by side with very weak ones. A Hann-weighted power spectrum of the signal set is shown on the left side of Figure 9. The middle shows a conventional method of sidelobe reduction, which leaves an average blur of 1.66 resolution bins. On the right is the image after processing by RFNav's algorithm, which reduces the average blur to 0.99 resolution bins.
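
The masking mechanism behind Figure 9 is straightforward to reproduce: place a weak tone 80 dB below a strong one and inspect the windowed spectrum. The sketch below shows only the conventional baseline (rectangular vs. Hann weighting) with assumed tone placements; RFNav's de-blurring algorithm is not reproduced.

```python
import numpy as np

# Sidelobe masking at 80 dB dynamic range: a strong scatterer's window
# sidelobes can bury a weak neighbor. Conventional baseline only.
N = 256
n = np.arange(N)
strong = 1.0  * np.exp(2j * np.pi * 50.3 * n / N)   # strong, off-grid scatterer
weak   = 1e-4 * np.exp(2j * np.pi * 58.0 * n / N)   # 80 dB weaker, ~8 bins away
x = strong + weak

for name, w in (("rect", np.ones(N)), ("hann", np.hanning(N))):
    spec = np.fft.fft(x * w)
    db = 20 * np.log10(np.abs(spec) / np.abs(spec).max() + 1e-300)
    print(f"{name}: level at weak-target bin = {db[58]:6.1f} dB "
          f"(weak target itself sits at -80 dB)")
# Rect sidelobes (~ -28 dB here) and even Hann sidelobes (~ -60 dB here)
# both sit above the -80 dB weak return, masking it; windowing also
# widens the main lobe, i.e., blurs the image. Reducing this masking is
# what the de-blurring algorithm targets.
```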

Figure 9. Left, power spectrum. Middle, conventional image, blur=1.66 bins.
Right, image from RFNav algorithm, blur = 0.99 bins.

Summarizing, AVs need RFNav's de-blurring algorithms to help mmWave imaging sensors see deep into sidelobes in a single frame. The result is improved detection of weak targets and reduced fast-time image entropy, aiding downstream fusion, feature extraction, and object identification. Our algorithms will also reduce the range and Doppler blurring of today's non-imaging radars.