Breakthroughs In Research & Development
RFNav is an R&D consultancy specializing in the creation of new technologies and intellectual property for solving ‘impossible’ problems. We are the minds behind today’s most disruptive technologies, from concept to fruition.
Use Our Ideas
Effective solutions to demanding problems
Varied and extensive experience is our greatest strength – a strength that makes our team uniquely qualified to drive projects from ideation through research, prototyping, and marketing.
Our track record spans decades of innovations in radar engineering and imaging, ultrasound imaging, prediction, detection, signal processing, artificial intelligence, tracking, sensor fusion, algorithms, object recognition, battery research, and sensor problems.
We have worked extensively in electronic warfare, jamming, and countermeasures with companies such as Raytheon, Lockheed Martin, and MITRE.
It is rare to find a firm with the depth and breadth of RFNav’s expertise.




Farragut Square, Washington, DC
Developing Tomorrow's Technology
In Partnership With You
We build solutions that are “impossibly” precise, able to discern hundreds of separate objects in opaque liquids, traffic, urban environments, and crowds – in any weather, including fog, snow and rain.
We are innovating essential technology, enhancing battery energy storage, and solving complex problems. And we’d love to enable your vision for the future.
RFNav : Subject Matter Experts In Thinking Outside The Box
The reasonable man adapts himself to the world: the unreasonable one persists in trying to adapt the world to himself. Therefore all progress depends on the unreasonable man.
In dealing with the future . . . it is more important to be imaginative & insightful than to be one hundred percent "right."
Our Build Partner, RaGE Systems
RFNav proudly partners with RaGE Systems for hardware builds, enabling us to turn the theoretical into the commercially practical. Together, RFNav and RaGE vigorously bring your prototype to life, optimized for mass production.


Russ Cyr
Managing Partner
Russ is an expectation exceeder. He has built RaGE from the ground up to deliver demanding high technology prototypes on time, on target, and within budget. With Russ behind your project, you’ll always know you’re not only meeting short term goals, but building a long-term foundation for manufacturing success.
He has over 30 years in semiconductors, wireless and specialized sub-systems for commercial, industrial and defense applications. Russ is in constant search for the best; the best people, the best ideas, the best components, the best partners and ultimately the best product design he and his team can achieve. As a result, RaGE and RFNav clients have grown to expect thrilling products. We hope you will, too!
RFNav Presents A Novel Solution To Enhance Li-ion Battery Longevity & Safety
A unique solution in lithium battery research that solves the problem of dendrites and battery degradation, leading to lithium batteries for electric cars that don’t need to be recycled!
Benefits of RFNav’s battery innovation with ultrasound:

[Comparison graphic: Current Art vs. RFNav]
- Pack $ / kWh
- Volume
- Weight
- Cycle Life
- Safety
- Life Cycle Cost
Related RFNav Patents #11158889, #11380944, #11950956
Imaging Tools To Build Profitable New Products
RFNav’s radar and ultrasound imaging extracts beautiful, information-rich 4D imagery from previously intractably dense scenes by applying new artificial intelligence (AI) and machine learning (ML) processes to your existing sensor’s raw data. As a result, RFNav’s imaging technology represents a monumental improvement in the efficacy of AI/ML techniques.

[Image comparison: Current Art vs. RFNav]
RFNav Patent #9,953,244
RFNav sensor fusion algorithms and artificial intelligence effectively remove the majority of the tricky artifacts that trouble modern fusion algorithms – resulting in clean imagery, with minimal artifacts, from your sensor suite.

[Image comparison: Current Art vs. RFNav]
RFNav Patent #9,903,946
Kiju Radar Technology For Autonomous Vehicles
RFNav’s surprisingly affordable Kiju Radar has lidar-like resolution, producing crisp image boundaries of numerous complex objects at challenging distances and in all weather conditions including snow, rain, and fog.

ITAR Export Cleared
RFNav Patents #9739881, #9983305
Simply put, RFNav’s Kiju Radar produces absurdly detailed and distinct imaging data under any conditions.
Two Example Problems
- (A) Detect & locate the young girl in the scene
- (B) Classify the detection as a young girl

Which sensors can handle these scenes?

Scene 1:
- Kiju : Yes
- Lidar : Yes
- Camera : Yes

Scene 2:
- Kiju : Yes
- Lidar : No
- Camera : No
Kiju Radar App: Smart City Street Intersection Traffic Monitoring
Kiju provides precise and accurate measurements for
- Vehicle and human identification
- Trajectories of the objects in the scene
- Counts and flow rates of the objects
Other applications of RFNav’s low-cost, high definition Kiju Radar
RFNav Kiju : scalable, low-cost, high-definition imaging, all-weather radar
Learn More About Kiju
Creating New Opportunities In Radar
Farragut Square, Washington, DC

RFNav's Radar Vibration Mitigation: Decades Ahead of Industry Leaders
A snapshot of our single-look sparse-array vibration mitigation exceeding current solutions, using RFNav’s integrated math and AI/ML algorithms.

[Image panels: Ideal | Corrupted Image Due to Sensor Vibration | RFNav’s Vibration Compensation]
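For a sense of scale of the vibration problem, here is a back-of-envelope sketch (our illustration, not RFNav's method; the 77 GHz carrier matches the radar specs elsewhere on this page, and the 0.1 mm displacement is an assumed value):

```python
import math

# Two-way phase error from a vibrating aperture: phase = 4*pi*d / lambda.
# 77 GHz carrier as elsewhere on this page; 0.1 mm displacement assumed.
C = 299_792_458.0                  # speed of light, m/s
lam = C / 77e9                     # wavelength, ~3.9 mm
d = 0.1e-3                         # vibration displacement, meters (assumed)
phase_err = 4 * math.pi * d / lam  # radians of two-way phase error
print(f"phase error ~ {phase_err:.2f} rad")
```

A displacement of a tenth of a millimeter already produces roughly a third of a radian of phase error, which is why uncompensated vibration corrupts coherent mmWave imagery.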
RFNav Dramatically Improves The State of The Art In Tracking
RFNav’s signature proprietary association algorithms + AI are both astonishingly accurate and surprisingly robust, easily tracking hundreds or thousands of objects in dense scenes.
Which tracker can handle these scenes?

1970s scenes:
- Kalman Tracker : Yes

50+ years later:
- Kalman + variants : No
- RFNav Tracker : Yes
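For reference, the 1970s-era baseline mentioned above can be sketched as a minimal constant-velocity Kalman filter (our illustration, not RFNav's tracker; all noise values are assumed):

```python
import numpy as np

# Minimal 1-D constant-velocity Kalman tracker sketch. All values assumed.
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity motion model
H = np.array([[1.0, 0.0]])              # we measure position only
Q = 0.01 * np.eye(2)                    # process noise covariance (assumed)
R = np.array([[0.25]])                  # measurement noise covariance (assumed)

x = np.array([[0.0], [0.0]])            # state: [position, velocity]
P = np.eye(2)

# Noisy position reports from an object moving ~1 unit per step.
for z in [1.0, 2.1, 2.9, 4.2, 5.0]:
    x = F @ x                           # predict
    P = F @ P @ F.T + Q
    y = z - (H @ x)[0, 0]               # innovation
    S = (H @ P @ H.T + R)[0, 0]
    K = P @ H.T / S                     # Kalman gain
    x = x + K * y
    P = (np.eye(2) - K @ H) @ P

print(f"estimated velocity ~ {x[1, 0]:.2f}")
```

A single Kalman filter like this tracks one object well; the hard part in dense scenes is associating hundreds of measurements to hundreds of such filters, which is where classic trackers break down.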
A New Era In Ultrasound Imaging
RFNav’s unique ultrasonic transducer design dramatically minimizes the Blind Zone and Ringing Artifacts. The resulting ultrasound images display a precision and quality unmatched by the current state of the art.

[Image comparison: Current Art vs. RFNav]
Applications include gaming, medical imaging, fingerprint ID, and dozens of others.
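The blind-zone geometry mentioned above can be sketched numerically (our illustration; the speed of sound and ring-down time are generic assumed values, not RFNav specs):

```python
# Blind zone from transducer ringing: while the transducer rings after
# transmit, echoes from nearby objects arrive and are lost.
# Blind range ~ (sound speed * ring-down time) / 2 (two-way travel).
c_air = 343.0        # m/s, speed of sound in air (assumed medium)
ring_down = 1e-3     # s, assumed transducer ring-down time
blind = c_air * ring_down / 2
print(f"blind zone ~ {blind * 100:.0f} cm")
```

Shortening the ring-down directly shrinks the blind zone, which is why transducer design matters for near-field imaging.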
Jobs @ RFNav
Battery Scientist
Benefits
- Excellent Compensation
- Distinction as Contributor to the Next Revolution in Battery System Design
- Flex Schedule, Excellent Benefits
Requirements
- 5+ Yrs Experience in Battery Research including Lithium Metal Anodes
- Strong Background in Electrochemistry, Electrode Thermo-Mechanical Properties, SEI characterization, and Electrode Surface Engineering
- Comfortable with cross-sectional analysis including SEM/EDS, Raman, IR
- Starts as a contract position, converts to full-time.
Physicist/Acoustic Engineer
Benefits
- Excellent Compensation
- Exciting Opportunity to be Part of Multi-disciplinary Team that Disrupts Batteries with Acoustics
- Flex Schedule, Excellent Benefits
Requirements
- 10+ Yrs Experience in Acoustics Research
- Multiphysics Simulation Experience including Acoustics, Acoustic Flow, and Heat Transfer
- Experience Iterating Multiphysics Models from Lab Measurements
- Starts as a contract position, converts to full-time.
FPGA Engineer
Benefits
- Excellent Compensation
- Be Part of an Exciting Startup in the AV & EV Space
- Flex Schedule, Excellent Benefits
Requirements
- Experience with Xilinx and Microsemi FPGA Devices & Dev tools
- 4+ Yrs Experience with ModelSim, C
- Coding experience includes UART, SPI, CSI-2, I2C, GPIO, USB
- Starts as a contract position, converts to full-time.
Social Media Guru
Benefits
- Great Compensation
- Exciting Opportunity with Startup in AV & EV & Battery Spaces
- Flex Schedule
Requirements
- Experienced Storyteller and Social Media Champion
- Marketing Experience with …
- Solid Writing & Video Editing Skills
- Part time. Send an example of your social media savvy.
Want to join RFNav’s imaging team? Solve this problem:
Given A, B, C, {A -> B, C -> D} find D

The winner will be granted an interview with RFNav
Hint 1
alpha beta gamma
Hint 2
delta epsilon zeta
Videos
- RFNav’s solution for the EV lithium-ion battery problem of longevity and safety
- RFNav's Kiju Radar produces lidar-like crisp boundaries of complex objects at distance in all weather
- RFNav's Kiju Radar navigation in:
(1) clear weather (optical view)
(2) light snow (optical & radar view)
(3) light snow & fog (optical & radar view)
(4) light snow & fog (optical only view)
(5) light snow & fog (radar only view)
Business FAQs
I have an idea and want to build a prototype. Where can RFNav help?
We can help at the idea, R&D&M, and prototype levels.
Idea level. Imagine a meeting with you, your team, and our team at a whiteboard. Together we brainstorm your exciting disruptive tech & business ideas.
R&D&M level. Our non-conventional team does the Research & Development with innovative SW/Algo/AI/ML and HW designs that realize your requirements as a system design with a path to Manufacturing. In many cases a by-product is the creation of multiple IP that brings added value to your company.
Prototype level. We help you realize your idea as an MVP or fully functioning prototype that meets your SWAPPC (size, weight, power, performance, cost) requirements. We can amplify and accelerate your existing path, or solve/build your prototype from scratch, soup to nuts, from algorithms/AI to system design to a fully functioning prototype.
What is your primary focus? Looking at your site, it seems you offer custom R&D in addition to automotive imaging radar and battery design.
Our primary focus is you. We align ourselves with your goals.
Our existing SW/Algo/AI and system designs are available to accelerate your path to a working prototype.
I want to acquire your company & IP. Would you be receptive?
We are selective in our associations. In that light, feel free to Contact us.
Battery FAQs
What does your ultrasound enhanced battery look like?
For a jelly roll style battery, a low cost/weight/volume thin film containing the electronics and transducers is coupled to the axial end caps.

Figure B1. Illustration of RFNav ultrasound enhanced battery concept.
What is your path forward on your ultrasound enhanced battery development?
We are seeking a sponsor for R&D support and prototype development. We have developed relationships with state-of-the-art battery researchers at LLNL and PMUT researchers at Penn State. We are ready to roll; just need to be turned On.
When will you have a working prototype of your ultrasound enhanced battery?
About 1 year, after turn On.
Radar FAQs
I’m in the autonomous vehicle industry and found your site. What differentiates you from others working the all weather vision problem?
Our Kiju radar provides the highest definition, all weather 4D imaging system for autonomous cars and trucks at the lowest system cost ($/voxel). We have solved problems that industry hasn’t even thought about when it comes to high definition imaging on moving vehicles. Our tech can leapfrog your company ahead of the current art in all weather high def imaging by a decade or more.
Why is RFNav Radar technology crucial to AVs?
AV sensing and safe navigation begins with high resolution crisp imagery that can “see” long distance, day or night, in clear or adverse weather. One big elephant in the room is that the imagery produced by lidar and cameras is severely degraded in adverse weather.
In contrast, RFNav’s Kiju radar sensor produces sharp, high definition, fast frame rate 4-D voxels (x, y, z, and Doppler) and 3-D imagery in all weather conditions. Our images have lidar-comparable spatial resolution. Our low cost ($/voxel) sensor enjoys a 3000:1 wavelength advantage over lidar for weather penetration.
RFNav’s tech is more than the imaging sensor itself. Our tech includes a suite of intelligent algorithms that solve hard problems such as multiple sensor fusion, image artifact removal, image de-blurring, automatic calibration, dense scene tracking, object identification, RF domain mapping, and much more.
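The 3000:1 wavelength advantage quoted above can be checked directly (a sketch; the 1310 nm lidar wavelength is our assumption of a common automotive lidar band, not a figure from this page):

```python
# Ratio of a 77 GHz radar wavelength to an assumed 1310 nm lidar wavelength.
C = 299_792_458.0            # speed of light, m/s
radar_wl = C / 77e9          # ~3.9 mm at 77 GHz
lidar_wl = 1310e-9           # 1310 nm (assumed lidar band)
ratio = radar_wl / lidar_wl
print(f"radar/lidar wavelength ratio ~ {ratio:.0f}:1")
```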
What distinguishes the RFNav Kiju radar from other radar sensors?
Today’s AV radar systems are low resolution (fat beams) with limited imaging capabilities. Traditional imaging requiring movement (Doppler) takes time. The resulting radar images are slow to form and coarse.
In contrast, RFNav’s Kiju Radar sensor supports stationary real beam imaging as well as non-stationary Doppler Beam Sharpening/SAR/ISAR image products. In simpler terms, high definition, fast frame images, honest in both Az & El angles, are generated for AVs both at rest and in motion.
Our Kiju system design provides the lowest imaging cost ($/voxel).
What weather conditions can your sensor see through?
RFNav’s Kiju Radar sensor performs imaging and scene interpretation in fog, dust, smoke, smog, haze, rain, and snow, where lidars and cameras fail.
What does the Kiju Radar sensor look like and where would a system with lidar comparable resolution be placed on an AV?
The sensor has a Boolean aperture architecture composed of two external components. One part looks like a horizontally oriented thin ribbon; the second is similar but shorter and vertically slanted. The Boolean aperture mounting locations are flexible. One example locates the apertures at the front windshield edge, as illustrated in Figure R1 below.

Figure R1. Kiju Radar Sensor on AV with lidar comparable resolution.
What is RFNav’s Kiju Radar spatial resolution and image frame rate? What is the single look field-of view for the RFNav sensor?
The RFNav Kiju Radar sensor’s performance and size are scalable. One model has the following lidar-comparable spatial/range resolution, frame rate, and field of view at 77 GHz:
Azimuth beamwidth | < | 0.24 degs |
---|---|---|
Elevation beamwidth | < | 1.1 degs |
Range Resolution | < | 0.2 meters |
Frame Rate | > | 2 kHz |
Azimuth FOV | < | ± 70 degs |
Elevation FOV | < | ± 55 degs |
Range Limit | < | 250 meters |
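To relate the beamwidths above to physical image cells, the cross-range extent at an example distance can be computed (our illustration; the 100 m range is an assumed example, not from the spec):

```python
import math

# Cross-range cell size implied by a beamwidth: cell ~ range * beamwidth(rad).
az_bw_deg, el_bw_deg = 0.24, 1.1   # beamwidths from the table above
rng = 100.0                        # example range, meters (assumed)
az_cell = rng * math.radians(az_bw_deg)
el_cell = rng * math.radians(el_bw_deg)
print(f"at {rng:.0f} m: azimuth cell ~ {az_cell:.2f} m, "
      f"elevation cell ~ {el_cell:.2f} m")
```

At 100 m the 0.24-degree azimuth beamwidth resolves features well under half a meter across, which is the sense in which the resolution is lidar-comparable.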
What does a single frame real-beam image look like when the AV equipped with the Kiju Radar and another car are both stationary (where Doppler is useless)?
A simulated Kiju radar single fast time frame image product for a stationary car at 42 feet from a stationary AV is shown in Figure R2. Notice that the image conveys high precision localization of the vehicle’s contour, edges and size including such details as the passenger mirror, bumper, trunk, wheel wells, and rear window trims.

Figure R2. Single frame, fast time, real-beam image example of a stationary car.
In Figure R2, the Kiju radar’s 4-D voxels, 3-D plus Doppler, are projected down to the 2-D plane. The car was modeled as a sparse collection of single point scatterers. Simplifying assumptions included no multipath, no multi-bounce, and no vehicle interior scatterers.
How does RFNav Kiju aperture compare with MIMO antenna arrays?
Some MIMO 2-D array designs have sparse T/R allocations compared to conventional 2-D electronically steered arrays. Unfortunately, large MIMO arrays with Lidar comparable resolution, Figure R3, are expensive and consume significant cross-sectional area. The large MIMO arrays have additional difficulties including maintaining calibration in the presence of array vibration.

Figure R3. Approximate size of a MIMO aperture with lidar comparable resolution.
In contrast, the RFNav Boolean aperture architecture yields a lower cost, more flexible, design (Figure R1) compared to a cost optimized 2-D MIMO array. The Boolean aperture has fewer T/R modules and a more robust reference signal distribution. The result is a lower system and installation cost, with simpler auto-calibration for internal motion compensation.
How does RFNav Kiju aperture compare with metamaterial apertures?
Some metamaterial antennas perform phase steering by changing the dielectric properties of each cell in a large array of cells. Analog beam forming is realized with a feed network and appropriate beam steering. These arrays typically have a single beam port and a single T/R module resulting in low cost/power. Disadvantages include slow imaging frame rates, difficulty with interference or large signal return cancellation, difficulty with real-time calibration, and difficulty with single pulse short to long range beam compensation. Further, to obtain Lidar comparable resolution the metamaterial aperture dimension occupies significant cross-section area, >0.9 meters wide by >0.2 meters tall. The latter is a challenge to locate on AVs (Figure R4).

Figure R4. Approximate size of a metamaterial aperture with lidar comparable resolution.
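The >0.9 m aperture figures quoted for competing designs follow from the diffraction relationship between wavelength and aperture size (a sketch; the 77 GHz carrier matches the specs elsewhere on this page):

```python
import math

# Angular resolution of an aperture: theta ~ lambda / D (radians).
C = 299_792_458.0
lam = C / 77e9               # wavelength, ~3.9 mm at 77 GHz
D = 0.9                      # aperture width in meters (from the text)
beamwidth_deg = math.degrees(lam / D)
print(f"beamwidth ~ {beamwidth_deg:.2f} deg")
```

A 0.9 m aperture at 77 GHz yields roughly a quarter-degree beamwidth, consistent with the 0.24-degree azimuth spec in the Kiju table above.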
In contrast, the RFNav Boolean aperture architecture supports fast frame rates, adaptive interference and clutter cancellation (STAP), real-time calibration, and single pulse variable range beam compensation, with minimal cross-section, resulting in a lower cost, more subtle installation and an overall low system cost (Figure R1).
How do Luneburg and Rotman lenses compare with RFNav’s Boolean Aperture?
Luneburg lenses are analog beam forming apertures. The classical Luneburg lens is a sphere with a dielectric constant that decreases with increasing radius. The result is that multiple simultaneous beams can be formed (Figure R5) at relatively low cost/power compared to a classical electronically steered phased array.

Figure R5. Illustration of focus of plane wave to one beam port by a classical Luneburg lens.
While other Luneburg lens topologies include more volume efficient cylindrical and disk-shaped topologies with non-radially symmetric dielectric distributions, the beamwidth is still limited by the aperture’s physical cross-section. Other challenges with Luneburg lenses include difficulty with interference or large nuisance signal cancellation, difficulty with real-time calibration, and difficulty with single pulse short to long range beam compensation. Like the metamaterial and MIMO antennas, the aperture field of a classical Luneburg lens required to obtain Lidar comparable azimuthal angular resolution has large cross-section area, >0.9 meters diameter (Figure R6).

Figure R6. Approximate size of a classical Luneburg lens with lidar comparable resolution.
The Rotman lens aperture is another analog beam forming device supporting multiple simultaneous beams (Figure R7).

Figure R7. Illustration of Rotman lens topology and plane wave focus to one beam port.
Some of the Rotman lens challenges are similar to the Luneburg lens including difficulty with large interference or nuisance signal return cancellation, difficulty with real-time calibration, difficulty with single pulse short to long range beam compensation, relatively high insertion loss, and the need for a large area to satisfy the aperture field of 0.9 meters by 0.2 meters for Lidar comparable angular resolution.
In contrast to both Luneburg and Rotman lenses, the RFNav Boolean aperture architecture supports adaptive interference and clutter cancellation (aka STAP), real-time calibration, and single pulse variable range beam compensation. The aperture’s lower cross-section and independent Boolean architecture enable more flexible mounting options with a lower system and installation cost with lidar-comparable angular resolution (Figure R1).
What does RFNav have for sensor fusion?
AVs have severe multi-mode sensor fusion challenges. As an example, cameras, lidar, radar, and ultrasound each have different propagation and scattering characteristics, voxel dimensions, sensor errors, and calibration drift. Some sensors may be incomplete (e.g. 2-D) while others may be 3-D. Further, as a group, the collection of sensors typically does not share a common phase center or baseline.
Sensor fusion can operate in many domains including pre-detection, post-detection, pre-track, and post-track. The fusion process may also work at different feature extraction levels from primitive (wavelets) to higher level features such as geometric shape bases to abstract levels including object identification.
One of RFNav’s sensor fusion algorithms operates at the early pre and post-detection processing stages, close to the sensor hardware. The algorithm develops fused N-D voxels from multiple incomplete (2-D) sensors and/or complete (3-D + Doppler) sensors with different physical locations. The remarkable aspect of RFNav’s technology is the elimination of almost all of the fusion errors, aka fusion “ghosts.”
A simulation example showing fusion of two incomplete (2-D) sensors that do not share a common baseline is shown in Figure R8. In this example each sensor is a 2-D radar each with different spatial resolutions or voxel dimensions. Each sensor gets one fast-time single pulse look at a set of single point scatterers arranged in the shape of the letters. Both pre and post-detection sensor data is presented to the fusion algorithm.

[Image comparison: Current Art vs. RFNav]
Figure R8. Example of low level 3-D voxel fusion for two 2-D sensors.
Left, traditional method corrupted with ghosts. Right, RFNav sensor fusion.
On the left is an image using a traditional fusion method. The dark dots that are not co-located on the letters indicate fusion ghosts. Downstream multi-look tracking and subsequent AI are compromised by these ghosts. On the right is the RFNav fusion algorithm result. Almost all ghosts are eliminated. The handful of ghosts remaining fall outside the constant down range cell where the letter targets are located.
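The origin of fusion ghosts can be seen in a toy example (our illustration, not RFNav's algorithm; sensor and target positions are assumed). Two bearing-only sensors each see two targets; naively intersecting every bearing from one sensor with every bearing from the other yields four candidate positions, of which only two are real:

```python
import itertools
import math

# Toy sketch of ghosting in naive two-sensor, bearing-only fusion.
s1, s2 = (0.0, 0.0), (10.0, 0.0)      # sensor positions (assumed)
targets = [(4.0, 6.0), (7.0, 2.0)]    # true target positions (assumed)

def bearing(sensor, pt):
    return math.atan2(pt[1] - sensor[1], pt[0] - sensor[0])

def intersect(p1, th1, p2, th2):
    # Intersect rays p + r*(cos th, sin th): solve the 2x2 linear system.
    d1 = (math.cos(th1), math.sin(th1))
    d2 = (math.cos(th2), math.sin(th2))
    det = -d1[0] * d2[1] + d1[1] * d2[0]
    if abs(det) < 1e-12:
        return None                    # parallel bearings, no intersection
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    r = (-dx * d2[1] + dy * d2[0]) / det
    return (p1[0] + r * d1[0], p1[1] + r * d1[1])

b1 = [bearing(s1, t) for t in targets]
b2 = [bearing(s2, t) for t in targets]
candidates = [c for c in (intersect(s1, a, s2, b)
              for a, b in itertools.product(b1, b2)) if c is not None]
ghosts = [c for c in candidates
          if min(math.dist(c, t) for t in targets) > 1e-3]
print(f"{len(candidates)} candidates, {len(ghosts)} ghosts")
```

With N targets the naive cross-product produces on the order of N² candidates, so in dense scenes ghosts rapidly outnumber real detections, which is the problem the fusion algorithm above addresses.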
Most radar images I have seen are blobs. How does RFNav construct crisp 3D and 4D mmWave radar images?
AVs operate in dense scenes containing many targets that span a wide radar dynamic range. For example, a small child standing next to a parked car may appear as a weak target, masked or obscured by the processing sidelobes of the large signal from an adjacent car door. The sidelobes result in image “blurring,” functionally a loss of resolution and an increase in image entropy, creating false signals.
RFNav has developed new algorithms, a mixture of signal processing and machine learning, that reduces the spectral domain “blurring” that occurs in conventional radar processing.
To illustrate the crispifying capability of the RFNav machine, the mmWave signal returns from a set of single point scatterers arranged in the shape of the “RFNav” letters in height and range were simulated. Data was formed for a single fast time look. The scatterers have a distribution of radar cross section spanning 80 dB dynamic range. In this scenario, very strong signals can appear side by side with very weak signals. A Hann weighted power spectrum of the signal set is shown on the left side of Figure R9. The middle shows a conventional method to reduce sidelobes resulting in an average blur of 1.66 voxels. On the right is the resulting image after being processed by RFNav’s algorithm, resulting in an average blur reduction to 0.99 voxels.

[Image panels: Current Art (power spectrum) | Current Art (conventional image) | RFNav]
Figure R9. Left, power spectrum. Middle, conventional image, blur=1.66 voxels. Right, image from RFNav algorithm, blur = 0.99 voxels.
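The sidelobe-masking effect behind this figure can be sketched with a two-tone simulation (our toy example: the 80 dB dynamic range matches the scenario in the text, but the signal, frequencies, and window choices are assumed):

```python
import numpy as np

# Toy two-tone sketch of sidelobe masking: a return 80 dB below a strong,
# off-bin return is buried under the strong return's rectangular-window
# sidelobes, while a Hann window's sidelobes decay below it.
n = 256
t = np.arange(n)
strong = np.exp(2j * np.pi * 40.5 / n * t)        # strong, off-bin scatterer
weak = 1e-4 * np.exp(2j * np.pi * 90.0 / n * t)   # 80 dB weaker scatterer
sig = strong + weak

def spectrum_db(x, w):
    s = np.abs(np.fft.fft(x * w))
    return 20.0 * np.log10(s / s.max() + 1e-300)

rect_db = spectrum_db(sig, np.ones(n))
hann_db = spectrum_db(sig, np.hanning(n))

# At the weak scatterer's bin, rectangular-window leakage from the strong
# return sits tens of dB above the weak return; with the Hann window the
# leakage has decayed and the weak return emerges near its true level.
print(f"bin 90: rect {rect_db[90]:.1f} dB, hann {hann_db[90]:.1f} dB")
```

Windowing trades mainlobe width (blur) for sidelobe suppression; the de-blurring algorithms described above aim to recover the weak targets without paying that resolution penalty.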
Summarizing, AVs need RFNav’s de-blurring algorithms to help mmWave imaging sensors see deep into sidelobes in a single frame. The result is improved detection of weak targets and reduced fast-time image entropy aiding downstream fusion, feature extraction, and object identification. Our algorithms will also reduce today’s non-imaging radar range and Doppler blurring.
How can I learn more about RFNav?
Contact us at info@RFNav.com
Decks
- New Imaging Sensor for AV Navigation in Rain, Fog, and Snow
- All Weather Autonomous Driving System
- Bad Weather - the Barrier to Mass Deployment of Autonomous Vehicles
- Displacing Lidar with 4-D HD Radar
- Beyond Lidar: HD Radar
- AI Frontiers Conference
- Autotech Council Navigation and Mapping Meeting
- Connected Car and Vehicle Conference
AI/ML Related Book Chapter
Chapter to appear in Natural Intelligence Neuromorphic Engineering (NINE)
Inventions
Topic Area | Example Applications | US Patent # |
---|---|---|
Low Cost High Definition 3D Radar Imaging for All Weather Autonomous Vehicle Navigation | Precise Navigation for AVs, Drones, Robots. Security Scanning for Airports & Stadiums | 9739881, 9983305 |
Multi-mode Sensor Fusion | AV Camera, Lidar, Radar, Sonar Image Fusion | 9903946 |
3D & 4D Image De-Blurring | Crisp Imaging of Vehicles, Animals, Humans, Concealed Weapons & ID | 9953244 |
Vibration Compensation in mmWave Imagery | RFNav Trade Secret | |
Dendrite Mitigation in Lithium Anode Batteries | Increased Battery Life and Safety for EVs, Smartphones, Laptops, Home & Grid Energy Storage | 11158889, 11380944 |
High Contrast Ultrasound Imaging | Medical Imaging & Fingerprint Scanning | 11950956 |