
Alternative Drone Navigation for the EW Era

Moving beyond vision-based navigation for Class 1-2 drones

Drones are poised to be the future of warfare

Consumer-grade sUAS are inherently vulnerable to EW

Look beyond vision-based systems to assure positioning, navigation, and timing (PNT)

From DC to the Donbas, drones are now universally recognized as the future of warfare. Today in Ukraine they are more deadly than artillery, accounting for up to eight in ten Russian casualties on the frontlines.

If drones are that future, smaller first person view (FPV) drones will be the new infantry. Ukrainian FPV production alone has now scaled to 200,000 a month, while in Britain, supplies to the same frontlines will increase ten-fold this year.

Operators are quickly discovering these smaller devices can complete the same missions as larger UAVs for a fraction of the price. The economics of the battlefield are shifting. Today, a four-figure quadcopter is well equipped to spot, and in some cases engage, targets that would otherwise require a multimillion-dollar aircraft or a targeted missile strike.

However, these same Class 1 and 2 drones remain highly attritable. In the first few months of the war in Ukraine, 90% of sUAS were grounded, surviving an average of just three to six flights. Two years on, not much has changed, with Ukraine neutralizing nearly 200 Russian sUAS via EW in a single night over Kyiv just this month (June 2025).

These cost-effective drones, while revolutionary, come with an Achilles’ heel: they are easy pickings for EW interference. It is a weakness their larger, more sophisticated UAV counterparts are increasingly well equipped to withstand.

In Numbers: sUAS on the frontlines 

With traditional anti-jamming and anti-spoofing devices such as nullers and beamformers too large, heavy, and power-hungry for these Class 1-2 assets, units are being forced to get innovative. In Ukraine, some operators are now using fiber-optic tethers up to 20 km long to create a hardwired relay between drone and operator.

The extreme nature of these workarounds shows not only how widespread jamming has become, but also how reliant those in the field now are on these drones.

The Rise of Vision-Based Navigation

Software, specifically computer vision, has been a significant breakthrough in protecting sUAS. An FPV drone's onboard camera is incredibly data-rich, and major alternative navigation providers are now leveraging those inputs in vision-based navigation (VBN) approaches to this critical problem space.

Optical flow, visual odometry, and simultaneous localization and mapping (SLAM) are among the most promising methods.
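
To make the first of these concrete, here is a minimal sparse optical-flow sketch built on OpenCV's Lucas-Kanade tracker. It is illustrative only: the video file, feature counts, and tracker parameters are assumed placeholders, not any provider's production pipeline.

```python
# Sparse optical flow: track corner features between frames and use their
# median displacement as a crude estimate of apparent camera motion.
import cv2
import numpy as np

cap = cv2.VideoCapture("flight.mp4")  # hypothetical onboard video
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

# Detect Shi-Tomasi corner features to track.
pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                              qualityLevel=0.01, minDistance=8)

while True:
    ok, frame = cap.read()
    if not ok or pts is None or len(pts) == 0:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Track features into the new frame with pyramidal Lucas-Kanade.
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    good_new = new_pts[status.flatten() == 1].reshape(-1, 2)
    good_old = pts[status.flatten() == 1].reshape(-1, 2)
    # Median pixel displacement approximates the frame-to-frame motion.
    print("apparent motion (px):", np.median(good_new - good_old, axis=0))
    prev_gray, pts = gray, good_new.reshape(-1, 1, 2)
```

However, VBN-reliant systems are not without their drawbacks: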

External Sensitivities 

By definition, vision systems rely on visual cues. From the Arctic to the South China Sea, these are few and far between across many of today's contested environments. Snow sheets, deserts, and large bodies of water are all but devoid of tracking points.

In more dynamic environments, the opposite problem arises. Moving subjects (cars, people, even swaying trees) cause occlusion, leaving the camera unable to lock on to stable reference features.

Reference matching is possible in feature-rich environments, but external sensitivities can quickly reduce the quality of these inputs.

These active environments are often where these drones are most mission-critical, but also where computer vision techniques are put under the most strain. Sufficient external light and contrast are also needed for them to function, and motion blur, low light, fog, mist, and rain can all make that extremely difficult.
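
The feature-scarcity problem is easy to demonstrate. The sketch below, assuming OpenCV's ORB detector and synthetic frames standing in for real terrain, shows how the pool of trackable features collapses on a flat, low-texture scene such as snow or open water.

```python
# Feature detectors starve in low-texture scenes. Synthetic images stand in
# for terrain; detector settings are illustrative, not tuned for any system.
import cv2
import numpy as np

rng = np.random.default_rng(0)
textured = rng.integers(0, 256, (480, 640), dtype=np.uint8)   # cluttered ground
featureless = np.full((480, 640), 200, dtype=np.uint8)        # snow/water proxy

orb = cv2.ORB_create(nfeatures=500)
for name, img in (("textured", textured), ("featureless", featureless)):
    keypoints = orb.detect(img, None)
    print(f"{name}: {len(keypoints)} trackable features")
# Expect hundreds of keypoints on the textured frame and none on the flat one.
```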

Error and Drift  

Even with high-quality reference matching, vision systems are still susceptible to drift. Visual odometry typically accumulates errors of a few percent over every few hundred meters traveled, and this increases significantly on beyond-line-of-sight missions.

For FPV drones, one camera alone is not enough to accurately judge ground scale. Without an absolute elevation reference (not always available, as depth sensors add weight), a change in altitude can mis-scale the estimated motion.
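
A toy random-walk model makes the compounding effect visible. In the sketch below, each 100 m leg is estimated with a small relative error, and the position error grows with distance flown; the 2% per-leg figure and the 5% monocular scale bias are assumed values, in line with the "few percent" cited above.

```python
# Toy visual-odometry drift model: per-leg relative errors plus an
# unobserved monocular scale bias accumulate over a 5 km mission.
import numpy as np

rng = np.random.default_rng(1)
segment_m, n_segments = 100.0, 50   # 5 km mission in 100 m legs
rel_sigma = 0.02                    # ~2% per-leg VO error (assumed)
scale_bias = 1.05                   # unobserved monocular scale error (assumed)

truth = est = 0.0
for _ in range(n_segments):
    truth += segment_m
    est += segment_m * scale_bias * (1 + rng.normal(0.0, rel_sigma))

print(f"truth {truth:.0f} m, estimate {est:.0f} m, "
      f"error {abs(est - truth):.0f} m ({abs(est - truth) / truth:.1%})")
# A 5% scale bias alone leaves the estimate roughly 250 m off after 5 km.
```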

Power Draw

The greatest limitation on the speed and range of vision-based navigation systems is the level of computation required. 

Relying on vision introduces latency into assured PNT, and an absence of real-time data will result in oscillation or, in the worst case, complete failure. Heavy, continuous vision processing also adds thermal constraints and risks overheating the onboard processors.
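
The oscillation risk can be shown with a toy position-hold loop: a proportional controller acting on delayed position estimates. The gains and delays below are assumed values, not measurements from any real autopilot, but the qualitative behavior holds: with no delay the loop converges, and as vision-pipeline latency grows it begins to oscillate and then diverge.

```python
# Toy position-hold loop with delayed feedback: x[t+1] = x[t] - k * x[t-d].
# The delay d models a slow vision pipeline; k is the controller gain.
def simulate(delay_steps, k=0.7, steps=60):
    history = [1.0]                       # start 1 m off the setpoint (0 m)
    for t in range(steps):
        # The controller only sees the estimate from `delay_steps` ticks ago.
        seen = history[-1 - delay_steps] if t >= delay_steps else history[0]
        history.append(history[-1] - k * seen)
    return max(abs(v) for v in history[-10:])   # late-stage amplitude

for d in (0, 2, 4):
    print(f"latency {d} ticks -> residual amplitude {simulate(d):.3f} m")
# Expect near zero with no delay and growing oscillation as latency rises.
```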

An Opportunistic Approach  

Despite these drawbacks, no one would question that vision data represents a huge leap forward in equipping sUAS with alternative navigation sources for GPS-degraded and GPS-denied environments. What also remains clear, however, is that the longer a device depends on computer vision, the greater the risk placed on mission outcomes.

Pendulum’s approach to equipping drones with alternative PNT reduces that risk by shrinking the role of vision data in onboard processing.

Even Class 1 drones are equipped with the inertial and auxiliary sensors found on almost any consumer-grade device (barometer, altimeter, accelerometer, compass, among others). By prioritizing inertial propagation, and using computer vision only to close the loop on the projected position, this opportunistic approach stands to dramatically improve sUAS performance in contested settings.
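
As a minimal 1-D sketch of that general pattern (not Pendulum's actual filter; all rates, gains, and noise levels are assumed values): position is dead-reckoned from a biased, noisy accelerometer at 100 Hz, and a sparse 1 Hz vision fix is used only to correct the accumulated drift.

```python
# Inertial-first fusion, 1-D toy: high-rate inertial propagation with a
# sparse vision-derived position fix closing the loop on drift.
import numpy as np

rng = np.random.default_rng(2)
dt, steps, fix_period = 0.01, 2000, 100    # 100 Hz IMU, 20 s, 1 Hz vision
true_v, accel_bias = 5.0, 0.3              # constant 5 m/s; IMU bias (m/s^2)

pos_true = 0.0
dr_pos, dr_v = 0.0, true_v                 # inertial-only dead reckoning
fu_pos, fu_v = 0.0, true_v                 # inertial + sparse vision fixes

for t in range(steps):
    pos_true += true_v * dt
    accel_meas = accel_bias + rng.normal(0.0, 0.2)   # true acceleration is 0
    dr_v += accel_meas * dt                # propagate without correction...
    dr_pos += dr_v * dt
    fu_v += accel_meas * dt                # ...and with vision closing the loop
    fu_pos += fu_v * dt
    if t % fix_period == fix_period - 1:
        innov = (pos_true + rng.normal(0.0, 0.5)) - fu_pos   # noisy vision fix
        fu_pos += 0.6 * innov              # correct position...
        fu_v += 0.4 * innov / (fix_period * dt)   # ...and bleed into velocity

print(f"inertial only:    {abs(dr_pos - pos_true):6.1f} m error after 20 s")
print(f"vision-corrected: {abs(fu_pos - pos_true):6.1f} m error after 20 s")
```

Even this crude blend bounds the drift that pure dead reckoning accrues, while the camera is consulted only once a second rather than continuously.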

Field testing Pendulum Navigate for assured sUAS PNT.

As well as removing the need for additional hardware and sensors, lightweight machine-learning models massively reduce the power draw on the drone itself. In the field, that means it can fly further, for longer.

Computer vision can and will transform the future of drone navigation; however, in more contested, extreme environments, every gram of weight and every watt of power counts. Multi-sensor fusion methods that keep inertial sensors at their core provide reliable, continuous PNT where camera data cannot always be relied on.

If you’d like to know more about Navigate for drones, reach out at navigate@pendulum.global
