GLONASS CDMA signals now on L1, L2
https://www.gpsworld.com/glonass-cdma-signals-now-on-l1-l2/ (April 29, 2024)

GLONASS satellites traditionally use L1 and L2 frequency division multiple access (FDMA) signals. FDMA is characterized by a different transmit frequency for each satellite. Newer satellite generations also transmit an L3 code division multiple access (CDMA) signal. CDMA uses the same frequency but different ranging codes for individual satellites. The first GLONASS K2 satellite, with the space vehicle number R803, was launched in August 2023. It extends the range of CDMA signals to the L1 and L2 bands.

Figure 1. GLONASS K2 spectrum of the L1 frequency band. The different components of the L1 CDMA signal are indicated by colored boxes. L1SC: secured signal. L1OC: open service signal. (All figures provided by the authors)

Frequency spectra of R803, including these new signals, are shown in Figures 1 and 2. They were measured with the 30 m high-gain antenna of the German Space Operations Center (GSOC) in Weilheim, Germany, on Jan. 17, 2024. The largest and sharpest peak in the L1 band, at 1,598.625 MHz, originates from the 0.5 MHz binary phase-shift keying (BPSK) FDMA signal. The center peak of the L1 CDMA signal is located at 1,600.995 MHz. It is related to the L1 open service signal, consisting of a data component (L1OCd) and a pilot component (L1OCp). L1OCd and L1OCp are combined by time-division multiplexing. The peaks ±5 MHz away from the L1 CDMA center frequency are introduced by the binary offset carrier (BOC) modulation of the secured L1SC signal. Prominent L1SC side lobes are visible at ±15, ±25 and ±35 MHz offsets from the center frequency. A quadrature phase-shift keying (QPSK) modulation is used to combine the L1OC and L1SC signals. The local minimum between 1,610 MHz and 1,614 MHz is caused by a notch filter onboard the satellite that protects radio astronomical observations of the hydroxyl spectral line at 1,612 MHz.
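
The peak locations follow directly from the GLONASS channel arithmetic. As a quick illustration, here is a minimal Python sketch; the base frequency and channel spacing are the published GLONASS ICD constants, not values taken from this article:

```python
# GLONASS L1 frequency plan (constants from the GLONASS ICDs).
L1_FDMA_BASE = 1602.0       # MHz, carrier of FDMA frequency channel 0
L1_FDMA_STEP = 0.5625       # MHz of spacing per frequency channel
L1_CDMA_CENTER = 1600.995   # MHz, shared by all satellites (CDMA)

def l1_fdma_freq(k: int) -> float:
    """Carrier frequency in MHz for FDMA frequency channel k."""
    return L1_FDMA_BASE + k * L1_FDMA_STEP

# R803 transmits its FDMA signal on frequency channel -6, which
# reproduces the sharp peak visible in Figure 1:
print(l1_fdma_freq(-6))   # 1598.625 MHz
print(L1_CDMA_CENTER)     # 1600.995 MHz, the L1 CDMA center peak
```

The same channel-number arithmetic applies in L2 with a base of 1246.0 MHz and a 0.4375 MHz step.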

Figure 2. GLONASS K2 spectrum of the L2 and L3 frequency bands. The different components of the L2 CDMA signal are indicated by colored boxes. L2SC: secured signal. L2xC stands for the time multiplexed L2OCp and L2 CSI signal. (All figures provided by the authors)

The L2 CDMA signal is composed of a signal for service information (L2 CSI) and the pilot open service navigation signal (L2OCp). As for L1, these two signals are time-division multiplexed and combined with the secured L2SC signal by QPSK. The left main lobe of the L2SC signal coincides with the L2 FDMA center frequency of 1,243.375 MHz. Both the L2 CSI and the L2OCp signals contribute to the peak at the L2 CDMA center frequency of 1,248.06 MHz. The L3 CDMA signal is composed of 10 MHz BPSK data (L3OCd) and pilot (L3OCp) components, resulting in a broad peak at 1,202.025 MHz.

FDMA and CDMA signals of GLONASS R803 were tracked with a JAVAD TRE_3S receiver running prototype firmware at GSOC in Oberpfaffenhofen, Germany. Figure 3 shows the differences between pseudorange and carrier-phase observations for the FDMA and CDMA signals in the L1 and L2 frequency bands. Long-term ionospheric effects were removed with a second-order polynomial, so the remaining effects include short-term ionospheric variations, multipath and observation noise. The standard deviation of the code-minus-carrier combination is generally at the half-meter level. Thanks to their advanced design, the CDMA signals outperform the legacy FDMA signals by 18% on L1 and by 31% on L2.
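
The detrending step described here is easy to reproduce. The following sketch uses NumPy with synthetic numbers (it is an illustration of the polynomial detrend, not the authors' actual processing; the trend and noise magnitudes are invented):

```python
import numpy as np

def code_minus_carrier_residuals(t, code, carrier):
    """Code-minus-carrier with a second-order polynomial removed.

    The code-minus-carrier combination contains twice the ionospheric
    delay plus a constant ambiguity, multipath and noise; fitting and
    removing a second-order polynomial takes out the slowly varying
    ionospheric trend, leaving short-term effects and noise.
    """
    cmc = code - carrier
    trend = np.polyval(np.polyfit(t, cmc, 2), t)
    return cmc - trend

# Synthetic example: a slow quadratic trend plus 0.5 m white code noise.
rng = np.random.default_rng(42)
t = np.linspace(0, 3600, 1800)              # one hour at 2 s spacing
iono = 5 + 1e-3 * t + 2e-7 * t**2           # slowly varying trend, meters
noise = 0.5 * rng.standard_normal(t.size)   # half-meter code noise
res = code_minus_carrier_residuals(t, iono + noise, np.zeros_like(t))
print(round(res.std(), 2))                  # recovers the ~0.5 m noise level
```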

Figure 3. Code-minus-carrier combination for GLONASS R803 FDMA and CDMA signals: L1 (left) and L2 (right). A second-order polynomial has been removed and the CDMA signals are shifted by 4 m. (All figures provided by the authors)

Further launches of L1 and L2 CDMA-capable GLONASS K2 satellites are planned in the coming years. A constellation of at least 12 such satellites is expected by 2030. To guarantee backward compatibility, these satellites will also transmit the L1 and L2 FDMA signals. Further improvements in positioning accuracy are expected from improved satellite clocks and inter-satellite laser ranging.

Further reading

Karutin, S. (2023), “GLONASS: The decade of transition to CDMA signals,” GPS World, Vol. 34, No. 12, pp. 39-41.

Russian Space Systems (2016), GLONASS Interface Control Document: Code Division Multiple Access Open Service Navigation Signal in L1 frequency band. Russian Rocket and Space Engineering and Information Systems Corporation, Joint Stock Company.

Russian Space Systems (2016), GLONASS Interface Control Document: Code Division Multiple Access Open Service Navigation Signal in L2 frequency band. Russian Rocket and Space Engineering and Information Systems Corporation, Joint Stock Company.

Manufacturers

GNSS data used in this article were collected with a JAVAD TRE_3S receiver. The spectral overviews were captured with a Rohde & Schwarz FSQ26 signal analyzer.

GLONASS: The decade of transition to CDMA signals
https://www.gpsworld.com/glonass-the-decade-of-transition-to-cdma-signals/ (December 20, 2023)

Figure 1. Initial GLONASS FDMA signals spectrum in L1 band. Image: Sergey Karutin

GLONASS remains the core of Russia’s positioning, navigation and timing (PNT) system and is used by people around the world. Annual shipments of new GLONASS/GNSS receivers for the communications, transport, agriculture and power industries exceed 25 million units in Russia alone. These users are interested in continuously improving PNT quality, which rests primarily on improvements to the basic radio navigation service generated by the GLONASS space complex.

This space complex consists of a constellation of medium-Earth-orbit (MEO) satellites, the modernized ground control complex and the ensemble of user equipment. The current constellation consists of 26 satellites spanning three generations and five modifications. For the past 15 years, GLONASS-M has been the core satellite type, and the constellation now includes 21 of them. That 14 of them function successfully beyond their guaranteed active lifetime attests to their high reliability. They are steadily being replaced with GLONASS-K satellites, of which there are already four in the constellation. Along with the GLONASS-K launches, in-orbit testing of the first GLONASS-K2 satellite began on August 7, 2023.

Since the launch of the first GLONASS satellite, the navigation signals have changed significantly. Initially, each of the 24 GLONASS satellites transmitted signals on its own carrier frequencies in the L1 and L2 bands (Figure 1). The total bandwidth of the registered GLONASS satellite network was 23.72 MHz in the L1 band and 20.72 MHz in the L2 band.

Figure 2. First phase GLONASS FDMA signals spectrum transformation in L1 band. Image: Sergey Karutin

Figure 3. Second phase GLONASS FDMA signals spectrum transformation in L1 band. Image: Sergey Karutin

Figure 4. Final GLONASS FDMA signals spectrum in L1 band. Image: Sergey Karutin

In 1995, the Russian Federation assumed obligations to protect the band used in radio astronomy in the search for extraterrestrial life. At the first stage (until 1998), broadcast of the navigation signals in carrier frequency channels 16-20 was terminated, and frequency channels 13, 14, 20 and 21 were used only under exceptional circumstances (Figure 2). From then on, all newly launched satellites transmitted signals only in frequency channels 0-12. By 2005, the total bandwidth of the GLONASS satellites had been reduced to 16.97 MHz in the L1 band and 15.47 MHz in the L2 band (Figure 3).

Since 2005, GLONASS satellites have been using frequency channels -7 to +6 (Figure 4) to broadcast frequency division multiple access (FDMA) navigation signals. As a result, the upper limit of the GLONASS signal bandwidth in the L1 band dropped from 1620.61 MHz to 1610.485 MHz, and the lower limit went down from 1596.89 MHz to 1592.953 MHz. The signal bandwidth in the L2 band changed similarly.
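
These bandwidth figures can be reproduced from the channel plan: carriers are spaced 0.5625 MHz apart in L1 (0.4375 MHz in L2), and each signal occupies roughly ±5.11 MHz around its carrier (the P-code main lobe). A small sketch; the quoted initial totals come out exactly if the original plan is taken to span channel slots 0 through 24, which is an inference from the numbers rather than a statement in the article:

```python
# GLONASS FDMA group bandwidth from the channel plan.
# Channel spacing and base frequencies (MHz) per the GLONASS ICD;
# each carrier occupies roughly +/-5.11 MHz (P-code main lobe).
HALF_SIGNAL = 5.11

def group_band(base, step, k_min, k_max):
    """(lower edge, upper edge, total width) in MHz for channels k_min..k_max."""
    lo = base + k_min * step - HALF_SIGNAL
    hi = base + k_max * step + HALF_SIGNAL
    return lo, hi, hi - lo

# Final L1 plan, channels -7..+6 (article rounds the lower edge to 1592.953):
lo, hi, width = group_band(1602.0, 0.5625, -7, 6)
print(lo, hi)                                  # 1592.9525, 1610.485 MHz

# Earlier plans reproduce the quoted totals:
print(group_band(1602.0, 0.5625, 0, 24)[2])    # 23.72 MHz in L1, initial
print(group_band(1246.0, 0.4375, 0, 24)[2])    # 20.72 MHz in L2, initial
print(group_band(1602.0, 0.5625, 0, 12)[2])    # 16.97 MHz in L1, by 2005
```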

The GLONASS-K2 satellite was developed to improve GLONASS user performance. The satellite broadcasts new code division multiple access (CDMA) signals in the above-mentioned bands as well as in the L3 band. The first satellite of this batch was successfully deployed in orbit on August 7 and has already started to broadcast the new CDMA signals. The radio telescope of Bauman Moscow State Technical University is used to monitor the broadcast signals and analyze the frequency and power characteristics of the satellite.

The radio telescope has a large-aperture, fully steerable antenna with a dish diameter of 7.75 m, giving a main-lobe width of 1.8° in the 1.6 GHz band and a gain of 40 dB for the received navigation signals.
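
These figures are consistent with standard parabolic-dish rules of thumb. The sketch below uses the common 70·λ/D beamwidth approximation and an assumed 60% aperture efficiency, both textbook assumptions rather than values from the article:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def dish_beamwidth_deg(freq_hz, diameter_m, k=70.0):
    """Approximate half-power beamwidth using the common k*lambda/D rule."""
    return k * (C / freq_hz) / diameter_m

def dish_gain_db(freq_hz, diameter_m, efficiency=0.6):
    """Aperture gain of a circular dish with an assumed efficiency."""
    wavelength = C / freq_hz
    return 10 * math.log10(efficiency * (math.pi * diameter_m / wavelength) ** 2)

print(round(dish_beamwidth_deg(1.6e9, 7.75), 1))  # ~1.7 deg (article: 1.8)
print(round(dish_gain_db(1.6e9, 7.75), 1))        # ~40.1 dB (article: 40)
```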

Users are primarily interested in the new L1OC CDMA navigation signal, transmitted along with the conventional L1OF signal. The combined spectrum of the FDMA signal at the carrier frequency of 1598.625 MHz (frequency channel -6) and the CDMA signal at the carrier frequency of 1600.995 MHz is shown in Figure 5.

Operational experience with recently manufactured satellites shows that their service life is about one and a half times the planned lifetime. The final GLONASS-M satellite (No. 761), launched last year, was manufactured in 2015. These circumstances make it possible to predict that renewal of the whole constellation with new GLONASS-K2 satellites broadcasting the full ensemble of CDMA signals will likely be finished by 2035.

In 2024, renewal of the constellation will continue with launches of GLONASS-K satellites and another GLONASS-K2 satellite.

Figure 5. FDMA and CDMA signal spectrum in the L1 band broadcast by the first GLONASS-K2 satellite. Chart: Bauman Moscow State Technical University

With the launch of the first GLONASS-K2 satellite accomplished, the Passive Quantum-Optical System (PQOS) is being implemented on the basis of Russian quantum-optical systems operating at a wavelength of approximately 0.5 µm. The PQOS provides pseudorange measurements in the optical band. The elements of the system include specialized ground equipment that registers the moments of laser-pulse emission by a ground laser station (ground PQOS) and specialized satellite payload equipment that registers the moments of laser-pulse reception onboard (onboard PQOS). Therefore, all new-generation GLONASS satellites can perform both conventional active (two-way) measurements and passive (one-way) measurements, with timescale differences determined to better than a nanosecond based on the data of the laser optical systems.

Processing the active and passive measurements together yields difference combinations that compare the timescales kept by the onboard and ground frequency standards at a previously unachievable picosecond level of precision. The accuracy of the PQOS results is sufficient for in-orbit testing of prospective new-generation onboard frequency standards with a daily stability of around 5×10⁻¹⁵.
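
The arithmetic behind these difference combinations is simple: the active (two-way) measurement gives the geometric range alone, while the passive (one-way) measurement is the range plus c times the clock offset, so differencing isolates the offset. A minimal sketch with synthetic numbers (relativistic and atmospheric corrections are omitted):

```python
C = 299_792_458.0  # speed of light, m/s

def clock_offset_from_laser(two_way_range_m, one_way_pseudorange_m):
    """Ground-to-satellite clock offset from the active/passive difference.

    Two-way laser ranging gives the geometric range; one-way (passive)
    ranging gives the range plus c times the clock offset. Differencing
    the two isolates the clock offset. (Simplified sketch only.)
    """
    return (one_way_pseudorange_m - two_way_range_m) / C

# A 3 mm active/passive difference corresponds to about 10 ps of offset:
print(clock_offset_from_laser(20_000_000.0, 20_000_000.003) * 1e12)  # ~10 ps

# A daily stability of 5e-15 means the clock wanders by only ~0.43 ns/day:
print(5e-15 * 86_400 * 1e9)  # ~0.43 ns
```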

The achieved accuracy of the PQOS results is also sufficient to calibrate measurement links for prospective GLONASS satellites, including the links of active measurement systems, inter-satellite links and ionosphere-free linear combinations of passive measurements based on the FDMA and CDMA signals. The results correspond to the world state of the art in metrology and ensure the uniformity of measurements. The PQOS and the technologies based on its measurements contribute fully to effective metrological support for the tests and operation of the GLONASS space complex, including prospective GLONASS-K2 satellites and the ground complex.

Russia launches Glonass-K2 No. 13
https://www.gpsworld.com/russia-launches-glonass-k2-no-13/ (August 25, 2023)


The Russian Federal Space Agency launched one of its Glonass global positioning satellites, Glonass-K2 No. 13 (Kosmos 2569), into medium-Earth orbit (MEO) on August 7 at 13:20 UTC, reported Everyday Astronaut and Russian Space Web. The satellite lifted off on a Soyuz 2.1b launch vehicle from the Plesetsk Cosmodrome in Russia.

Glonass-K2 No. 13 was launched to improve the accuracy of the Russian dual-use global positioning system. The K2 satellites are the fourth iteration in satellite design for GLONASS.

The new generation of satellites provides navigation accuracy of better than 30 cm and features an unpressurized satellite bus (Ekspress-1000) manufactured by ISS Reshetnev. The satellites also use new code division multiple access (CDMA) navigation signals, transmitting three signal types: two in the L1 and L2 ranges for military users and one channel in the L1 range accessible to civilian users.

Each K2 satellite weighs 1,645 kg and has an operational lifetime of 10 years.

Faux signals for real results: Racelogic
https://www.gpsworld.com/faux-signals-for-real-results-racelogic/ (August 23, 2023)

GPS World Editor-in-Chief Matteo Luccio discusses the challenges and prospects of the simulator industry with Julian Thomas, managing director, Racelogic.

An exclusive interview with Julian Thomas, managing director, Racelogic.


In which markets and/or applications do you specialize?

We originally designed our LabSat simulator for ourselves, because we supply GPS equipment to the automotive market. Then, we decided to sell it into that market, which is our primary market, for other people to use. That’s where we started, but it has moved on since then. We supply many of the automotive companies who use it for testing their in-car GPS-based navigation systems.

However, we’ve moved on to our second biggest market, which is the companies that make deployment systems for internet satellites, which use it for end-of-life testing. Several of our customers use it. That’s because we do space simulations, so we can simulate the orbits of satellites. That’s very useful when they’re developing their satellites.

We supply many of the major GPS board manufacturers — such as NovAtel, Garmin, and Trimble — when they’re developing their boards and testing their devices. We supply many of the phone companies — such as Apple and Samsung — and many of the GPS chip manufacturers — such as Qualcomm, Broadcom, and Unicom. More or less any company that’s into GNSS.

How has the need for simulation changed in the past five years, with the completion of the BeiDou and Galileo GNSS constellations, the rise in jamming and spoofing threats, the sharp increase in corrections services, and the advent of new LEO-based PNT services?

It all started off very simple, with just GPS, which was one signal and one frequency. We got that up and working very well and it helped us a lot. Then we got into this market. In the last few years, we’ve had to suddenly invent 15 new signals. We do two systems, really: one is a record-and-replay system. You put a box in a car, on a bike, in a backpack, or on a rocket, and you record the raw GPS signals; then you can replay those on the bench. That requires greater bandwidth, greater bit depth, smaller size, battery power, all of that.

The other is pure signal simulation. We simulate the signals coming from the satellites from pure principles. So, we’ve had to dive into how those signals are structured, reproduce them mathematically, and then incorporate that into our software. That’s been 15 times the original work we thought it would be, but as we add each signal it tends to get a bit simpler until they add new ways to encode signals, and then it gets complex again. We’ve had to increase our bandwidth and our bit depth for the recording to cover all of these new signals.

Because our systems record and replay, they’re used a lot to record real-world jamming. In many scenarios, our customers will take one of our boxes into the field and record either deliberate jamming or jamming that’s been carried out by a third party. Then they can replay that in the comfort of their lab.

With regards to spoofing, we’ve just improved our signal simulation. So, we can completely synchronize it with real time. We can do seamless takeover of a GNSS signal in real time. We can reproduce the current ephemeris and almanac. If we transmit a sufficiently powerful signal, we can completely take over that device. Then we can insert a new trajectory into it. That’s a very recent update we’ve done.

If the complexity and amount of your work has gone up so much in the last few years but you cannot increase your prices at the same rate, what does that do to your business model?

It’s the same people that produce the signals in the first place, so they still have a job. However, as we add more signals and capabilities, we tend to get more customers as well.

Oh, so, you’re expanding your market!

Right, right.

Regarding some of the new PNT services being developed, how do you simulate them realistically without the benefit of recordings of live sky signals?

It is all pure signals simulation. You go through the ICD line-by-line and work out the new schemes. Here’s an interesting anecdote. Our developer who does a lot of the signal development is Polish and is also fluent in Russian. When we were developing the GLONASS signals, he was working from the English version of the GLONASS ICD. He said that it didn’t make any sense. So, he looked at the Russian version and discovered that the English one had a typo. When he used the Russian version, everything worked perfectly. He told this to his contacts at GLONASS and they thanked him and updated the English translation of their document. So, you are very, very much reliant on every single word in that ICD.

Are there typically differences between the published ICD and the actual signal?

No, no. Apart from the Russian one, which had a typo, they’re very good. For example, we’ve recently implemented the latest GPS L1C signal. My developer spent six months recreating it and getting all the maths right and the only way you could test it was to connect it to a receiver and hit “go.” It just worked the first time. He almost fell off his chair. The ICD in that case was very, very accurate.

Hope that Xona’s ICD is just as good.

Yeah.

Are accuracy requirements for simulation increasing, to enable emerging applications?

Yes, absolutely. No one can have too much accuracy. Everyone’s chasing the goal of getting smaller, faster, and more accurate systems. They want greater precision and better accuracy from their simulators, as well as a faster response. We do real-time simulators and they want a smaller and smaller delay from when you input the trajectory to when you get the output. Luckily for us, Moore’s law is still in effect, so, as the complexity of the signals and the accuracy requirements increase, computers can churn through more data. Luckily, we’re able to keep up on the hardware side as well, because much of our processing is done using software. Some companies do it in hardware and some companies do it in software. We concentrate on the software side of things.

Here’s another interesting anecdote from my Polish guy. He noticed that the latest Intel chips contain an instruction that multiplies and divides at the same time but that it wasn’t available in Windows. So, he put in a request with Microsoft for that operational code and they incorporated it into the very latest version of dotnet, which has improved our simulation time by 7%. I see little improvements like that all the time.

Are all your simulators for use in the lab or are some for use in the field? If the latter, for what applications and how do they differ from the ones in the lab? (Well, for starters, I assume that they are smaller, lighter, and less power-hungry…)

All our systems are designed to be used inside and outside the lab. They can all be carried in a backpack, on a push bike, in a car. We do that deliberately, because we come from the automotive side of things, so we have to keep everything very small and compact.

Besides automotive, what are some field uses?

Some of our customers have put them in rockets, recording the signal as it goes up, or in boats. We have people walking around with an antenna on their wrist connected to one of our systems, so that they can simulate smartwatches. There are many portable applications. We have a very small battery-powered version, which makes it very independent.

Are there any recent success stories that you are at liberty to discuss?

Our most exciting one is a seamless transition for simulation that we developed to replace or augment GPS in tunnels. We’ve been talking to many cities around the world that are building new tunnels. Because modern cars automatically call emergency services when they crash or deploy their airbags, they need to know where they are, of course. Cities need to take this into account when they are building new tunnels, which can pass over each other or match the routes of surface streets. Therefore, accurate 3D positioning in the tunnels has become essential. It requires installing repeaters every 30 meters along each tunnel and software that runs on a server and seamlessly updates your position every 30 meters. As you enter a tunnel, your phone or car navigation system instantly switches to this system. It’s been received very well because it’s mainly software and the hardware is pretty simple. We’ve brought the cost down to a fifth of the cost of standard GPS simulators for tunnels. So, we’re talking to several cities about some very long tunnels, which is very exciting.

Far Out: Positioning above the GPS constellation
https://www.gpsworld.com/far-out-positioning-above-the-gps-constellation/ (August 9, 2023)

Read Richard Langley’s introduction to this article:Innovation Insights: Falcon Gold analysis redux


Figure 1: Diagram of cis-lunar space, which includes the real GPS sidelobe data collected on an HEO space vehicle. (All figures provided by the authors)

As part of NASA’s increased interest in returning to the moon, the ability to acquire accurate, onboard navigation solutions will be indispensable for autonomous operations in cis-lunar space (see Figure 1). Artemis I recently made its weeks-long journey to the Moon, and spacecraft carrying components of the Lunar Gateway and Human Landing System are planned to follow suit. During launch and within the GNSS space service volume, space vehicles can depend on the robust navigation signals transmitted by GNSS constellations (GPS, GLONASS, BeiDou, and Galileo). However, beyond this region, NASA’s Deep Space Network (DSN) serves as the system to track and guide lunar spacecraft through the dark regions of cis-lunar space. Increasingly, development of a lunar navigation satellite system (LNSS) that relies on a low size, weight and power (SWaP) “smallSat” constellation is being discussed for various possible orbits such as low lunar orbit (LLO), near rectilinear halo orbit (NRHO) and elliptical frozen orbit (ELFO).

Figure 2: DPE 3D (left) and 2D (right) spatial correlogram shown on a 3D north-east grid.

We have implemented direct positioning estimation (or collective detection) techniques, which have been employed in other GNSS-degraded environments such as urban canyons, to make the most of the limited and weak GPS signals (see Figure 2). The algorithm used in conventional GNSS positioning employs a two-step method. In the first step, the receiver acquires signals to get a coarse estimate of the received signal’s phase offset. In the second step, the receiver tracks the signals using a delay lock loop coupled with a phase or frequency lock loop. The second step enables the receiver to get fine measurements, ultimately used to obtain a navigation solution. In the scenario addressed in our work, where a vehicle is navigating beyond the GPS satellite constellation, the signals are weak and sparse, and a conventional GPS receiver may not be able to acquire or maintain a lock on a satellite’s sidelobe signals to form a position solution. For a well-parameterized region of interest (that is, having a priori knowledge of the vehicle orbital state through dynamic filtering), and if the user’s clock error is known within a microsecond, a direct positioning estimator (DPE) can improve acquisition sensitivity and yield better position solutions. DPE works by incorporating code/carrier tracking loops and navigation solutions into a single step. It uses a priori information about the GPS satellites, user location and clocks to directly estimate a position solution from the received signal. The delay-Doppler correlograms are first computed individually for the satellites and then mapped onto a grid of candidate locations to produce a multi-dimensional spatial correlogram. By combining all signals with a cost function to determine the spatial location with the most correlation between satellites, the user position can be determined. As mentioned, signals received beyond the constellation are sparse and weak, which makes DPE a desirable positioning method.
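
In toy form, the mapping of per-satellite correlation onto a position grid can be sketched as follows. This is a deliberately simplified 2D illustration of the grid-search idea, not the authors’ implementation: delays are expressed in meters, the clock bias is assumed known, Doppler is ignored, and the 293 m value approximates one C/A code chip:

```python
import numpy as np

def dpe_grid_search(sat_positions, measured_delays, candidates, chip_m=293.0):
    """Toy direct positioning estimator over a grid of candidate positions.

    For each candidate, predict the delay (range, in meters) to every
    satellite, evaluate a triangular correlation peak around the measured
    delay, and sum the contributions across satellites. The candidate with
    the highest combined correlation is the position estimate.
    """
    scores = np.zeros(len(candidates))
    for p_sat, d_meas in zip(sat_positions, measured_delays):
        predicted = np.linalg.norm(candidates - p_sat, axis=1)
        # Triangular correlator: peaks when predicted matches measured delay.
        scores += np.clip(1.0 - np.abs(predicted - d_meas) / chip_m, 0.0, None)
    return candidates[np.argmax(scores)]

# Toy scenario: three satellites, true user position at the origin.
sats = np.array([[20e6, 0, 10e6], [0, 20e6, 10e6], [-15e6, -15e6, 10e6]])
truth = np.array([0.0, 0.0, 0.0])
delays = np.linalg.norm(sats - truth, axis=1)   # noise-free ranges, meters

# Candidate grid spaced 100 m apart in the east-north plane.
east, north = np.meshgrid(np.arange(-1000, 1001, 100),
                          np.arange(-1000, 1001, 100))
cands = np.column_stack([east.ravel(), north.ravel(), np.zeros(east.size)])
est = dpe_grid_search(sats, delays, cands)
print(est)  # recovers the true position, [0. 0. 0.]
```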

BACKGROUND

The proposed techniques draw from several studies exploring the use of weak signals and provide a groundwork for developing robust direct positioning methods for navigating beyond the constellation. NASA has supported and conducted several of the studies in developing further research into the use of signals in this space.

A study done by Kar-Ming Cheung and his colleagues at the Jet Propulsion Laboratory propagates the orbits of satellites in GPS, Galileo, and GLONASS constellations, and simulates the “weak GPS” real-time positioning and timing performances at lunar distance. The authors simulated an NRHO lunar vehicle based on the assumption that the lunar vehicle is in view of a GNSS satellite as long as it falls within the 40-degree beamwidth of the satellite’s antenna. The authors also simulate the 3D positioning performance as a function of the satellites’ ephemeris and pseudorange errors. Preliminary results showed that the lunar vehicle can see five to 13 satellites and achieve a 3D positioning error (one-sigma) of 200 to 300 meters based on reasonable ephemeris and pseudorange error assumptions. The authors also considered using relative positioning to mitigate the GNSS satellites’ ephemeris biases. Our work differs from this study in several key ways, including using real data collected beyond the GNSS constellations and investigating the method of direct positioning estimation for sparse signals.

Luke Winternitz and colleagues at the Goddard Space Flight Center described and predicted the performance of a conceptual autonomous GPS-based navigation system for NASA’s planned Lunar Gateway. The system was based on the flight-proven Magnetospheric Multiscale (MMS) GPS navigation system augmented with an Earth-pointed high-gain antenna, and optionally, an atomic clock. The authors used high-fidelity simulations calibrated against MMS flight data, making use of GPS transmitter patterns from the GPS Antenna Characterization Experiment project to predict the system’s performance in the Gateway NRHO. The results indicated that GPS can provide an autonomous, real-time navigation capability with comparable, or superior, performance to a ground-based DSN approach using eight hours of tracking data per day.

In direct positioning or collective detection research, Penina Axelrad and her colleagues at the University of Colorado at Boulder and the Charles Stark Draper Laboratory explored the use of GPS for autonomous orbit determination in geostationary orbit (GEO). They developed a novel approach for directly detecting and estimating the position of a GEO satellite from a very short GPS observation period, which they presented and demonstrated using a hardware simulator, a radio-frequency sampling receiver and MATLAB processing.

Ultimately, these and other studies have directed our research into novel methods for navigating beyond the GNSS constellations.

DATA COLLECTION

The data we used was collected as part of the U.S. Air Force Academy-sponsored Falcon Gold experiment and post-processed by analysts from the Aerospace Corporation. A key notion behind the design of the experiment was an emphasis on off-the-shelf hardware components. The antenna used onboard the spacecraft was a 2-inch patch antenna, and the power source was a group of 30 NiMH batteries. To save power, the spacecraft collected 40-millisecond snapshots of data and only took data every five minutes. The GPS L1 frequency was down-converted to a 308.88 kHz intermediate frequency and sampled at a low rate of 2 MHz (below the Nyquist rate), and the samples were only 1 bit wide. Again, the processing was designed to minimize power requirements.
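
A quick check of the numbers above shows just how small the resulting data budget was (a back-of-the-envelope sketch using only the parameters quoted in this section):

```python
# Falcon Gold snapshot data budget from the quoted parameters.
SAMPLE_RATE_HZ = 2_000_000      # 2 MHz sampling, 1-bit samples
SNAPSHOT_S = 0.040              # 40-millisecond snapshots
SNAPSHOT_INTERVAL_S = 300       # one snapshot every five minutes

bits_per_snapshot = SAMPLE_RATE_HZ * SNAPSHOT_S       # ~80,000 bits
bytes_per_snapshot = bits_per_snapshot / 8            # ~10 kB per snapshot
duty_cycle = SNAPSHOT_S / SNAPSHOT_INTERVAL_S         # ~0.013% receiver on-time

print(round(bytes_per_snapshot))   # 10000 bytes per snapshot
```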

METHODS AND SIMULATIONS

To test our techniques, we used real data collected during the Falcon Gold experiment by a launch vehicle upper stage (we'll call it the Falcon Gold satellite), which collected data above the constellation in a highly elliptical orbit (HEO). The data was sparse and the signals were weak. However, the correlation process showed that the collected data contained satellite pseudorandom noise (PRN) codes. Through preliminary investigation, we found that the acquired Doppler frequency offsets matched the predicted orbit of the satellite when propagated forward from an initial state. The predicted orbit was derived from orbital parameters estimated with a batch least-squares fit of range-rate measurements using Aerospace Corporation's TRACE orbit-determination software. The propagation method uses Dormand-Prince eighth-order integration with a 70-degree, first-order spherical harmonic gravity model, and accounts for the gravitation of the Moon and Sun. The specifics of this investigation are detailed below.
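As a rough illustration of the propagation step, the sketch below integrates a point-mass two-body model with a fixed-step RK4 integrator. The actual processing used eighth-order Dormand-Prince integration with a high-degree spherical-harmonic gravity field plus lunar and solar gravitation; the initial state here is illustrative, not the Falcon Gold estimate.

```python
import numpy as np

MU = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2

def two_body_accel(r):
    """Point-mass gravity only; the article's model adds a 70-degree
    spherical-harmonic field plus lunar and solar gravitation."""
    return -MU * r / np.linalg.norm(r) ** 3

def rk4_step(state, dt):
    """One fixed-step RK4 update of [x, y, z, vx, vy, vz]."""
    def deriv(s):
        return np.concatenate([s[3:], two_body_accel(s[:3])])
    k1 = deriv(state)
    k2 = deriv(state + 0.5 * dt * k1)
    k3 = deriv(state + 0.5 * dt * k2)
    k4 = deriv(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# Propagate an HEO-like state forward (values illustrative only)
state = np.array([7.0e6, 0.0, 0.0, 0.0, 9.5e3, 1.0e3])
for _ in range(600):          # 600 steps of 10 s = 100 minutes
    state = rk4_step(state, 10.0)
print(np.linalg.norm(state[:3]))  # current geocentric radius, m
```

A variable-step, eighth-order method such as Dormand-Prince trades more function evaluations per step for much larger steps at the same accuracy, which matters when propagating over hours of a HEO arc.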

Figure 3: GPS constellation “birdcage” (grey tracks), with regions of visibility near the GPS antenna boresight in blue and green for the given line-of-sight from the Falcon Gold satellite along its orbit (orange).

The positions of the GPS satellites are calculated using broadcast messages (combined into so-called BRDC files) and International GNSS Service (IGS) precise orbit data products (SP3 files). GPS satellites broadcast signals containing their orbit details and timing information with respect to an atomic clock. Legacy GPS broadcast messages contain 15 ephemeris parameters, with new parameters provided every two hours. The IGS supports a global network of more than 500 ground stations, whose data is used to precisely determine the orbit (position and velocity in an Earth-based coordinate system) and clock corrections for each GNSS satellite. These satellite positions, along with the one calculated for the Falcon Gold satellite, allowed us to simulate visibility conditions. In other words, at points along the Falcon Gold trajectory, we determine whether the vehicle is within the 50° beamwidth of a GPS satellite whose line of sight is not blocked by Earth.

Figure 3 shows a plot rendering of the visibility conditions of the Falcon Gold satellite at a location along its orbit to the GPS satellite tracks. Figure 4 depicts three of the 12 segments where signals were detected and compares the predicted visibility to the satellites that were actually detected. A GPS satellite is predicted to be visible to the Falcon Gold satellite if the direct line-of-sight (DLOS) is not occluded by Earth and if the DLOS is within 25° of the GPS antenna boresight (see Figure 5).
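The two visibility tests described above, limb clearance and off-boresight angle, can be sketched as follows. The nadir-pointed boresight assumption and the example geometries are ours, not taken from the article.

```python
import numpy as np

R_EARTH = 6378137.0  # Earth's equatorial radius, m

def visible(gps_pos, user_pos, half_beam_deg=25.0):
    """Predict visibility per the article's two tests: the line of sight
    must lie within `half_beam_deg` of the GPS antenna boresight
    (assumed nadir-pointed) and must not be occluded by Earth."""
    los = user_pos - gps_pos
    boresight = -gps_pos  # nadir direction from the GPS satellite
    cos_off = np.dot(los, boresight) / (np.linalg.norm(los) * np.linalg.norm(boresight))
    off_boresight = np.degrees(np.arccos(np.clip(cos_off, -1.0, 1.0)))
    if off_boresight > half_beam_deg:
        return False
    # Earth occlusion: minimum distance from Earth's center to the LOS segment
    t = np.clip(-np.dot(gps_pos, los) / np.dot(los, los), 0.0, 1.0)
    closest = gps_pos + t * los
    return bool(np.linalg.norm(closest) > R_EARTH)

gps = np.array([26_560e3, 0.0, 0.0])  # roughly GPS orbital radius
print(visible(gps, np.array([-30_000e3, 0.0, 0.0])))     # False: LOS blocked by Earth
print(visible(gps, np.array([-40_000e3, 25_000e3, 0.0])))  # True: clears the limb, ~20 deg off boresight
```

This is the geometry that lets a receiver above the constellation hear satellites on the far side of Earth: the signal spills past the limb inside the transmit beam.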

Figure 4: Predicted visibility of direct line-of-sight to each GPS satellite where a blue line indicates the PRN is predicted to be visible but undetected. A green line is predicted to be visible and was detected, and a red line indicates that the satellite is predicted to not be visible, but was still detected.

Figure 5: Depiction of the regions of a GPS orbit where the Falcon Gold satellite could potentially detect GPS signals based on visibility.

As a preliminary step to evaluate the Falcon Gold data, we analyzed the Doppler shifts that were detected at 12 locations along the Falcon Gold trajectory above the constellation. By comparing the Doppler frequency shifts detected to the ones predicted by calculating the rate of change of the range between the GPS satellites and modeled Falcon Gold satellite, we calculated the range rate root-mean-square error (RMSE). Through this analysis, we were able to verify the locations on the predicted trajectory that closely matched the detected Doppler shifts.
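The comparison described above reduces to two small computations: a predicted range rate from relative position and velocity, and an RMSE against the range rates implied by the detected Doppler shifts. The sketch below uses made-up numbers purely to show the arithmetic.

```python
import numpy as np

def range_rate(r_user, v_user, r_gps, v_gps):
    """Predicted range rate: relative velocity projected onto the
    user-to-GPS line of sight."""
    los = r_gps - r_user
    return np.dot(v_gps - v_user, los) / np.linalg.norm(los)

def range_rate_rmse(predicted, detected):
    predicted, detected = np.asarray(predicted), np.asarray(detected)
    return np.sqrt(np.mean((predicted - detected) ** 2))

# A detected Doppler shift implies a range rate: rr = -c * f_doppler / f_L1
C, F_L1 = 299_792_458.0, 1575.42e6
doppler_hz = np.array([2500.0, -1200.0, 800.0])      # illustrative values
detected = -C * doppler_hz / F_L1
predicted = detected + np.array([3.0, -4.0, 5.0])    # pretend model errors, m/s
print(range_rate_rmse(predicted, detected))  # ~4.08 m/s
```

Minimizing this RMSE over points along the predicted trajectory is what identifies the location (and, as discussed below, the time tag) that best matches the detections.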

These results direct our investigation to regions of the dataset and help parameterize the orbit track so we can effectively search the delay and Doppler correlograms that populate the spatial correlograms within the DPE. Figure 6 shows the time history of the difference between the predicted range rates along the trajectory and the detected range rates. That is, the constant detected range-rate value is subtracted from the changing predicted range rate over the duration of the trajectory, not just at the location on the trajectory at the detection time (dashed vertical line). From this we can see that the TRACE method gives range rates near the detected values at the approximate detection time for the 12 different segments.

Figure 6: Plots depicting the 12 segments of detection and the corresponding time history of differences of range-rate values for each GPS PRN detected. The time history is of the range-rate difference between the predicted range rate from the TRACE-estimated trajectory and the constant detected range rate at the detection time (vertical line).

Excluding Segment 12, which was below the MEO constellation altitude, Segment 6 has more detected range rates than any other segment. On closer inspection of this segment, using IGS precise orbit data products, it appears that the minimum RMSE of the range rates from the detected PRNs is offset from the reported detection time by several seconds (see Figure 7). Investigating regions along the Falcon Gold TRACE-estimated trajectory while assuming a mismatch in time tagging yields a location (in Earth-centered, Earth-fixed coordinates) with a lower RMSE between predicted and detected range rates.

Figure 7: Range-rate difference between the predicted range rate from the TRACE-estimated trajectory and the constant detected range rate at the detection time (left). A portion of the trajectory around Segment 6 with the TRACE-estimated location at the time of detection (red) and the location with the minimum RMSE of range rate (black).

To determine the search space for the DPE, we first find the location along the original TRACE-estimated trajectory with the minimum RMSE of range rates for each segment. We then propagate the state (position and velocity) at that location to the Segment 6 time stamp. If a segment has more than three observed range rates (Segments 6 and 12), we perform a least-squares velocity estimate from the range-rate measurements at locations along the trajectory and select the location with the smallest RMSE. For Segment 12, the position and velocity obtained from least squares are then propagated backward in time to the Segment 6 time stamp. All of these points along the trajectory, as well as the original point from the TRACE-estimated trajectory, are used in a way similar to a sigma-point filter: the mean and covariance of the position and velocity values are used to sample a Gaussian distribution. This distribution serves as the first iteration of candidate locations for the DPE. There were three iteration steps, and at each iteration the range of clock-bias values to search was refined, from a spacing of 1,000 meters to 100 meters to 10 meters. On the third iteration, the sampled Gaussian distribution was also resampled with 1,000 times the covariance matrix values in the directions perpendicular to the direction to Earth. This was done to gain better insight into which GPS satellites were contributing to the DPE solution.
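A minimal sketch of the candidate-generation step described above, with made-up trajectory points: fit a mean and covariance to the collected states, sample a Gaussian cloud of DPE candidates, and build the successively refined clock-bias search grids. The point values, sample count, and grid half-width are our assumptions.

```python
import numpy as np

def candidate_cloud(states, n_samples=500, seed=1):
    """Fit a mean and covariance to the collected trajectory states
    (rows of [position, velocity]) and sample a Gaussian cloud of DPE
    candidates, echoing the article's sigma-point-like construction."""
    states = np.asarray(states, dtype=float)
    mean = states.mean(axis=0)
    cov = np.cov(states, rowvar=False)
    # Small jitter keeps the few-sample covariance numerically positive-definite
    cov += 1e-9 * np.trace(cov) * np.eye(mean.size)
    rng = np.random.default_rng(seed)
    return rng.multivariate_normal(mean, cov, size=n_samples)

def clock_bias_grid(center, spacing, half_width=10):
    """Clock-bias candidates (expressed in meters); the article refines
    the spacing per iteration: 1,000 m, then 100 m, then 10 m."""
    return center + spacing * np.arange(-half_width, half_width + 1)

# Made-up trajectory points (position in m, velocity in m/s)
pts = [[7.0e6, 1.0e5, 2.0e5, 100.0, 9.5e3, 1.0e3],
       [7.1e6, 1.2e5, 1.8e5, 120.0, 9.4e3, 1.1e3],
       [6.9e6, 0.8e5, 2.2e5,  90.0, 9.6e3, 0.9e3]]
cands = candidate_cloud(pts)
print(cands.shape)                        # (500, 6)
print(clock_bias_grid(0.0, 1000.0).size)  # 21 candidates at 1 km spacing
```

Each candidate state, paired with each clock-bias value, defines one hypothesis at which the delay/Doppler correlograms are evaluated to build the spatial correlogram.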

RESULTS

Figure 8 shows the correlation peaks for each of the signals reported to be detected, using a 15-millisecond non-coherent integration time within the DPE acquisition. Satellite PRNs 4, 16 and 19 are clearly detected. Satellite PRN 29 is less obviously detected, though the maximum correlation value is associated with the reported detected frequency. However, this holds only if the Doppler search band is narrowly selected around the reported detected frequency. Similarly, while code delay shows a clear acquisition peak for PRNs 4, 16 and 19, for PRN 29 the peak code-delay value is more ambiguous, with many peaks of similar correlation power. Figure 8 depicts the regions around the maximum-peak correlation chip delay.
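To make the acquisition step concrete, the sketch below performs FFT-based circular correlation with non-coherent accumulation over 15 one-millisecond blocks, in the spirit of the 15-millisecond integration described above. The random ±1 "code" (one chip per sample, not a real C/A code), the sample rate, and the signal parameters are all stand-ins.

```python
import numpy as np

def noncoherent_acquisition(iq, code, fs, dopplers, blocks=15):
    """Sum squared magnitudes of 1 ms coherent correlations over
    `blocks` blocks, searching code delay via FFT circular
    correlation for each Doppler bin."""
    n = int(fs * 1e-3)                          # samples per 1 ms block
    code_fft = np.conj(np.fft.fft(np.resize(code, n)))
    t = np.arange(n) / fs
    power = np.zeros((len(dopplers), n))
    for b in range(blocks):
        seg = iq[b * n:(b + 1) * n]
        for i, fd in enumerate(dopplers):
            wiped = seg * np.exp(-2j * np.pi * fd * t)    # carrier wipe-off
            corr = np.fft.ifft(np.fft.fft(wiped) * code_fft)
            power[i] += np.abs(corr) ** 2                 # non-coherent sum
    return power  # peak row = Doppler bin, peak column = code delay

# Build a weak synthetic signal and recover its delay and Doppler
rng = np.random.default_rng(2)
fs, fd_true, delay = 2.0e6, 1500.0, 700
code = rng.choice([-1.0, 1.0], size=2000)     # stand-in PRN chips
n = int(fs * 1e-3)
t_all = np.arange(15 * n) / fs
sig = np.roll(np.resize(code, 15 * n), delay) * np.exp(2j * np.pi * fd_true * t_all)
iq = sig + (rng.normal(0, 3, 15 * n) + 1j * rng.normal(0, 3, 15 * n))
p = noncoherent_acquisition(iq, code, fs, dopplers=np.arange(0, 3001, 500))
i, j = np.unravel_index(np.argmax(p), p.shape)
print(i, j)  # Doppler bin 3 (1500 Hz) and code delay 700
```

Non-coherent accumulation discards carrier phase between blocks, which is what lets weak signals build up despite the 1-bit, below-Nyquist sampling, at the cost of a squaring loss relative to longer coherent integration.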

Figure 8: Acquisition peak in frequency (left) and time (right) for PRN 4, 16, 19 and 29. The correlograms are centered on the frequency predicted from the range rate calculated along the trajectory.

For the first iteration of DPE, the peak coordinated acquisition values for PRN 16 and PRN 4 are chosen for the solution space. From the corresponding spatial correlogram, the chosen candidate solution is roughly 44 kilometers away from the original position estimated using TRACE.
For the second iteration of DPE, the clock bias is refined to search over a 100-meter spacing. The peak values for PRN 16 and PRN 19 are chosen for the solution space and the chosen candidate solution is roughly 38 kilometers away from the original position estimated using TRACE.
For the final iteration, Figures 9 and 10 depict the solutions with the 10-meter clock bias spacing and the approach of spreading the search space over the dimension perpendicular to the direction of Earth. Again, this was done to illustrate how the peak correlations appear to be drawing close to a single intersection location. However, the results fall short of the type of results shown in the spatial correlogram previously depicted in Figure 2 when many satellite signals were detected.

Figure 9: Acquisition peaks plotted in the time domain with the candidate location chosen at the location of the vertical black line for the detected PRNs for the third iteration of the DPE method.

Figure 10: Spatial correlogram with the candidate location chosen at the location of the black circle for the detected PRNs for the third iteration of DPE method. The original TRACE-estimated position is indicated by a red circle. The two positions are approximately 28 kilometers apart.

A similar iterative method was followed using not just the four detected PRNs, but any satellite predicted to be visible under relaxed criteria that allow for reception through the first and second sidelobes of the GPS transmit antenna, corresponding to a larger 40° off-boresight limit. The final spatial correlogram (Figure 11) shows intersections similar to those in Figure 10. However, there is potentially another PRN with a peak contribution near the original intersection point. These results are somewhat inconclusive and will need further investigation.

Figure 11: Spatial correlogram with the candidate location chosen at the location of the black circle for the detected PRNs for the third iteration of DPE method using additional satellites. The original TRACE-estimated position is indicated by a red circle. The two positions are approximately 24 kilometers apart.

CONCLUSIONS AND FUTURE WORK

Our research investigated the DPE approach to positioning beyond the GNSS constellations using real data. Because our results were promising but inconclusive, we will further investigate ways to parameterize our estimated orbit for use within a DPE algorithm in conjunction with other orbit-determination techniques, such as filtering. Additional methods that may aid this research include using precise SP3 orbit files instead of the broadcast navigation message (BRDC) within our DPE approach. More work is also needed to determine whether time-tagging issues could explain the observed discrepancies, and to formulate additional visibility-prediction methods that could help partition the search space. Additionally, we plan to investigate other segments where few signals were detected but more satellites are predicted to be visible (a better test of DPE). Finally, using the full 40-millisecond data segments rather than the 15 milliseconds used to date may provide the additional signal strength needed for more conclusive results.

ACKNOWLEDGMENTS

This article is based on the paper “Direct Positioning Estimation Beyond the Constellation Using Falcon Gold Data Collected on Highly Elliptical Orbit” presented at ION ITM 2023, the 2023 International Technical Meeting of the Institute of Navigation, Long Beach, California, January 23–26, 2023.


KIRSTEN STRANDJORD is an assistant professor in the Aerospace Engineering Department at the University of Minnesota. She received her Ph.D. in aerospace engineering sciences from the University of Colorado Boulder.

FAITH CORNISH is a graduate student in the Aerospace Engineering Department at the University of Minnesota.

<p>The post Far Out: Positioning above the GPS constellation first appeared on GPS World.</p>

ComNav device aids in skyscraper completion https://www.gpsworld.com/comnav-device-aids-in-skyscraper-completion/ Tue, 01 Aug 2023 12:15:57 +0000 https://www.gpsworld.com/?p=103250 Four T300s from ComNav Technology have been used as active control GNSS points on the top of Sweden's tallest building, Karlatornet, during its construction to deliver 3D coordinates to total stations.

Image: ComNav Technology

Four T300s from ComNav Technology have been used as active control GNSS points on top of Sweden's tallest building, Karlatornet, during its construction, delivering 3D coordinates to total stations, with one unit serving as a base station. The building is set to be completed this month.

The T300 is a receiver with a built-in radio-frequency and baseband chip and a unique quantum-real-time kinematic (RTK) algorithm. It supports all major constellations, including BDS-2, BDS-3, GPS, GLONASS, Galileo, QZSS and NavIC.

The receiver is designed for demanding surveying tasks and features tilt compensation, 4G/Wi-Fi connectivity, 8 GB of internal memory and an easy survey workflow with the Android-based Survey Master software. It is designed to make collecting accurate data easy and fast, whether by a beginner or an experienced professional surveyor, the company said.

PNT by Other Means: Oxford Technical Solutions https://www.gpsworld.com/pnt-by-other-means-oxford-technical-solutions/ Wed, 05 Jul 2023 16:06:50 +0000 https://www.gpsworld.com/?p=102909 An exclusive interview with Paris Austin, Head of Product – New Technology, Oxford Technical Solutions. For more exclusive interviews from […]

An exclusive interview with Paris Austin, Head of Product – New Technology, Oxford Technical Solutions. For more exclusive interviews from this cover story, click here.


What are your title and role?

I’m the head of product for core technology at OxTS. My role now is focused on R&D innovation. So, the research side, developing prototypes and taking new technology to market effectively. One of the key things we’re examining is GNSS-denied navigation: how we can improve our inertial navigation system via other aiding sources and what other aiding sensors can complement the IMU or inertial measurement unit to give you good navigation in all environments. Use GNSS when it’s good, don’t rely on it when it’s bad or completely absent.

We rely increasingly on GNSS but are also increasingly aware of its weaknesses and vulnerabilities. What do you see as the main challenges?

Excessive reliance on anything leads to people exploiting it, which is where the spoofing, the jamming, and the intentional denial come in. We all rely on technology nowadays to do all our menial tasks; then, if we lose the technology, we don't have the skills to do the task ourselves and we're in trouble. Mass global reliance on GNSS is both a good and a bad thing. It is good for technology because costs come down. Access to GNSS data is increasingly easy and devices that use it are increasingly cost-effective. But if your commercial, industrial, or military operations rely too much on that one sensor, they can fall over. That's where complementary PNT comes in: if you can put your eggs in other baskets, so that you have that resilience or redundancy, then you can continue your operation, be it survey, automotive or industrial, even if GNSS fails, is intermittently unavailable, or is unavailable for a long period of time.

However, you can fully replace a GNSS only with another GNSS.

You cannot replace GNSS with anything that has all the pros and none of the cons. You could use something like lidar or an IMU to navigate relative to where you started. However, you would not know where you are in the world without reference to a map, which would have been made with respect to GNSS global coordinates. The best thing you can do is use other sensors alongside GNSS to plug the gaps, relying on it less from moment to moment: start with a global reference, navigate relative to that for a period of time, and then get another global update. In between, you can navigate via dead reckoning or local infrastructure that has been referenced with respect to the global frame. That way, you can transition between GNSS and localized aiding without any dropouts in your operation or functionality, and without relying on completely clean GNSS data all the time.

As you say, you can’t replace it. If you do claim to be breaking free from GNSS you’re really playing a different game and just describing it in a way that sounds as good as GNSS, but in reality you’re saying, “I can navigate in this building but I don’t know where this building is” until you start saying, “Well, I’ve referenced it with respect to a survey point that used a GNSS survey pole.” At that point, you’re not breaking free from GNSS, you’re just using it differently.

INS-GNSS integration has been around for a long time and the two technologies are natural partners because each one compensates for the other’s weaknesses. What have been some of the key recent developments in that integration?

The addition of new GNSS constellations has helped a lot, because you need four satellites for a position or time lock and six satellites to get RTK. The 12 to 14 satellites from GPS and GLONASS that were previously visible at any one time have doubled with the addition of Galileo and BeiDou. So, having six satellites at any one time has become a much more reasonable proposition for maintaining that position lock in the first place. Meanwhile, IMU sensors have been coming down in price. So, you can make a more cost-effective IMU than ever, or you can spend the same and get a much better sensor than you ever could before. Your period between GNSS updates is also less noisy, and you have less random walk and more stability.

With less drift you can also go for longer periods without re-initializing your IMU.

Yeah, exactly. Your dead reckoning period can go longer, while still taking advantage of tight coupling wherein you use the ambiguity area of the IMU to reduce the search area for the satellites. So, a better IMU means that you can use GNSS more readily when you go under a bridge or go through a tunnel. You can lock on to satellites quicker again because of the advancements that have been made with the IMU technology.

What have been some of the key advances in IMU technology in the last five or ten years?

As with GNSS receivers, the market has become more competitive; there are now more options than ever before. People being disruptive in the space has allowed us to use lower-cost sensors for the same performance, or to mix and match gyroscopes and accelerometers to get the most complementary IMU. Previously, you may have had an accelerometer that far outweighed the performance level of the gyroscope. So, you would have very low velocity drift over time. But if your heading drifts, you still end up in the wrong place when you haven't had GNSS for a while.

So, that's allowed us to pick a much more complementary combination of sensors and produce an IMU that we manufacture and calibrate ourselves, while using off-the-shelf gyroscopes and accelerometers. That allows us to make an IMU that is effectively not bottlenecked in any one major area. Previously, with IMUs, you took what you could get, and some of that technology was further ahead than others. So, it's a good thing for us because the sensors we're getting do not cause single-source bottlenecks, and we can achieve a higher level of performance than we ever could, without having to significantly increase our prices.

The way we’ve always seen it, either you add features or performance level and maintain the price, because the technology is maturing over time, or you disruptively lower your price with the same technology. On occasion, we have done that in the survey space. That’s where the performance level requirements are far tighter because people are moving from static survey using GNSS, where they’re used to millimeter-level surveys, into the mobile mapping space, where they still rely entirely on RTK GNSS.

However, they also rely on high accuracy heading, pitch, and roll to georeference points from a lidar scan at a distance instead of only exactly where they are. Where new IMU technology has helped us is to get the better heading, pitch, and roll performance for georeferencing as well as reducing the drift while we dead reckon in a GNSS outage.

What is the typical performance of IMU accelerometers and gyros these days?

It boils down to what it gives us in terms of position drift or heading, pitch, and roll drift over 60 seconds. Real-time heading, pitch, and roll is heavily affected by gyroscope performance.

How much more do you have to pay to get that increase in performance?

There are definitely diminishing returns. When you look at some of the Applanix systems that have very good post-processing performance in terms of drift, you're talking about something like $80,000 for a mobile mapping survey system that is maybe 50% better on roll and pitch in normal conditions, let alone an outage, vs. $30,000 to $40,000 for our top system, which is 0.03° roll and pitch, for example. If you go down to 0.015°, you can pay double for the INS. Similarly, if you go the other way, cheaper, you can probably get a 0.1° roll-and-pitch system for $1,000.

So, it's a very steep curve. The entry-level systems are very disruptively low-priced now, but given the requirements for certain applications, particularly survey, that 0.1° means you can never achieve centimeter-level point-cloud georeferencing. And that's where people are still justifying spending $80,000 or more on the INS. They also spend similar amounts on their RIEGL lidar scanners and other profilers. So, it's complementary to the quality of the other sensors. However, it really doesn't make sense to spend $1,000 on your INS and then $80,000 on your lidar, because you're going to bottleneck the point cloud that you get out of it at the end anyway.

The same goes for autonomous vehicles, where people are now spending sub-$1,000 on their lidar or their camera, and they don’t want to spend $30,000 to $40,000 on their INS for a production level, autonomous vehicle. So, there needs to be that similar complementary pricing for sensors in that space, where you can offer an INS for hundreds of dollars, for example, that performs maybe only a percentage less than INSs do today.

For an autonomous vehicle to stay in lane, it still needs these building blocks to be high accuracy, because it only has tens of centimeters with which to play. However, to stay in its lane, it doesn't care where it is in the global frame at that moment in time, only where the lane markings are. It will care where it is in the global frame when it comes to navigate off a map that someone else has made and it's looking for features within that map (such things as traffic signs, stoplights, and things that are out of sight or occluded by traffic), so that it knows it is approaching them even when the camera is blocked at that time. That's where global georeferencing comes in and where GNSS effectively remains critical. Right?

It ranges price-wise. The top-end systems, Applanix and NovAtel, in the open-road navigation sense, are not orders of magnitude better, but you do end up paying double very quickly. If you look at the datasheet, positioning in open-sky conditions is identical between a £1,000 system and an £80,000 system. The differences all come in the drift specs, or the heading, pitch, and roll specs being achieved, because the value really comes from the IMU being used at that point.

Is most of the quality difference between these devices due to better machining, smarter electronics, or improved post-processing?

Any one of them on its own will not get you a good navigation solution. Fundamentally, you can have a good real-time GNSS-only system that will work at a centimeter level if you just use, say, a u-blox receiver, which is less than $100. Adding a low-cost IMU can fill some gaps, but not particularly intelligently, and you'll get jumps and drop-outs or unrecoverable navigation. That's when the algorithms come into play, in terms of intelligent filtering of bad data and knowing when to fall back on one solution versus the other and when to blend the two.

I was asking specifically within INS. When you’re talking about a $1,000 INS versus an $80,000 INS, how much of the improvement in performance is due to manufacturing, how much of it is due to smart electronics, and how much of it is due to algorithms or post processing?

Most of it is probably down to the raw sensor quality and then the calibration of the sensors. IMU calibration is important for compensating for bias and scale-factor errors, but also for the misalignment angles of the sensors. So, you need to make sure that your accelerometers and your gyros are all mounted exactly orthogonal to each other. A $1,000 sensor is very unlikely to be calibrated to the same level as an $80,000 one. That's probably because you'd get 10% more out of calibrating the $1,000 one, but you might get three times the performance out of calibrating the $80,000 one. So, you have a lot more to gain from a high-end system in terms of unlocking its potential, whereas the low-end sensors are probably already giving 80% to 90% of their potential out of the box, with no calibration at all.
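The error model described here (bias, scale factor, and axis misalignment) is commonly written as a matrix correction applied to each raw measurement. The sketch below is a generic textbook form with hypothetical calibration terms, not OxTS's actual pipeline.

```python
import numpy as np

def apply_imu_calibration(raw, misalignment, scale, bias):
    """Generic sensor-error model: corrected = M @ ((raw - bias) / scale),
    where M removes the small non-orthogonality between the three sense
    axes. (Illustrative model with hypothetical terms.)"""
    return misalignment @ ((raw - bias) / scale)

# Hypothetical calibration terms for an accelerometer triad
bias = np.array([0.02, -0.01, 0.005])        # m/s^2
scale = np.array([1.001, 0.998, 1.002])      # unitless scale factors
misalignment = np.array([[1.0, -0.002, 0.001],
                         [0.002, 1.0, -0.003],
                         [-0.001, 0.003, 1.0]])

raw = np.array([0.05, -9.82, 0.02])          # one raw accelerometer sample
print(apply_imu_calibration(raw, misalignment, scale, bias))
```

In practice these terms are estimated on a calibrated rate table over temperature; an uncalibrated system has to estimate them live (for example, in a Kalman filter), which is what drives the warm-up behavior discussed next.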

Calibration also affects such things as warm-up time. A well-calibrated system will already be modeled accurately almost as soon as you power it on. If you don't calibrate the system, you can still have a Kalman filter or something similar running in real time that models the errors live. But it means you won't be at spec-level performance as soon as you power up. When does it matter to you that you get the best data? Is it the instant you power up, because you're navigating an autonomous vehicle out of a parking garage? Or do you have 10 minutes before you need to use the data for anything, and therefore you can take those 10 minutes to model the sensors live?

You might save money on the electronics budget but spend it to pay the driver to do the warm-up procedure. You can reallocate where you spend your money. If you’re rolling out a fleet of 100 vehicles, though, you probably don’t want to have to have 100 drivers that are trained to do a warm-up procedure. So, you would spend the money on the electronics to have an INS that does not require a warm-up. That is an option that you can go with now. If you spend the extra you can get away from the warm-up procedure requirements, because things have been modeled during calibration instead of in real time.

Your website focuses on three areas: automotive, autonomy, and surveying and mapping. Why those and what might be next in terms of markets or end user applications?

Automotive is probably the bread-and-butter part of OxTS. For a long time, automotive users were looking for a test and validation device that could give them their ground truth data to validate onboard vehicle sensors. We were very much the golden truth sensor, making sure that the sensors they were putting into the production vehicles were fit for purpose and safe. So, if they claimed a vehicle had autonomous emergency braking, they used our sensor to say how far away it was from the target (for example, a pedestrian) when it made the vehicle stop. Did it brake with the appropriate distance between them? They had a unit in each vehicle and got centimeter accuracy between them. That was very easy to do with GNSS, because on a proving ground, automotive users always have RTK.

Now the automotive world is moving into the urban environments and doing more open-road testing. So, the need for complementary PNT is more on their mind than ever. They are looking for a technology from us and our competitors that allows them to keep doing those tests that they did on the proving ground, but in real-world scenarios. They may collect 1,000 hours of raw data and then only have an autonomous emergency braking (AEB) event kick in three times in those 1,000 hours. They will then look at the OxTS data at that time and say something like, "Did the dashboard light come on and then did the brake kick in at the required time to avoid the collision?"

So, they rely on the INS data to be accurate all the time. It cannot be that in 1,000 hours, if you get those three events, two of them do not meet the accuracy requirements to be your ground truth sensor. Because then they would basically say, well, we don’t know whether the AV kicks in at the right time on the open road. They would have to fall back to the proving ground testing to have any confidence. So, that’s where the automotive world is looking to use an INS to reference its onboard sensors.

In autonomy and survey, on the other hand, the INS is used actively to feed another sensor to either georeference or, in the case of autonomy, actively navigate the vehicle. So, that data being accurate is critical because an autonomous vehicle without accurate navigation cannot move effectively and would have to revert to manual operation. There’s a lot to do with localization and perception and avoidance of obstructions and things like that.

Timing synchronization is critical. People haven't found a way to synchronize multiple vehicles without using GNSS and PPS. Some people are using PTP to synchronize, but they'll often have a GNSS receiver at the heart of it, with nanosecond-accurate time, as the actual synchronization source, and everything else is a slave PTP device that operates off of it. So, if we did not give accurate timing, position, and orientation, there is basically nothing that vehicle could do to navigate other than navigating relative to where it was when it last had accurate INS time.
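For context on how PTP slaves discipline themselves to that GNSS-fed master: the standard delay-request exchange yields the slave's clock offset and the path delay from four timestamps. A minimal sketch of that arithmetic, assuming a symmetric network path (the function name and timestamps are illustrative):

```python
def ptp_offset_and_delay(t1, t2, t3, t4):
    """Classic PTP delay-request exchange:
    t1: master sends Sync, t2: slave receives it,
    t3: slave sends Delay_Req, t4: master receives it.
    Assumes a symmetric path; returns
    (slave clock offset from master, one-way path delay), in seconds."""
    offset = ((t2 - t1) - (t4 - t3)) / 2.0
    delay = ((t2 - t1) + (t4 - t3)) / 2.0
    return offset, delay

# slave clock 50 microseconds ahead of master, 10 microsecond path delay
off, d = ptp_offset_and_delay(0.0, 60e-6, 100e-6, 60e-6)
```

The slave then steers its clock by the computed offset, so every device in the vehicle ends up traceable to the receiver's GNSS time.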

Often, these vehicles will enter a kind of limp mode or stop completely and require user operation to get to the next stage. You see it with the street-drone-type small robots now, which will stop if a pedestrian walks in front of them, obviously, because that is a safety requirement. But also, if a robot does not know where it is, like a Roomba operating indoors that cannot localize with respect to the landmarks in its map, it will effectively try to re-localize off random movements until it can orient itself. In that scenario, an INS or an IMU can help you reduce the number of times you lose absolute localization. Where the autonomy side comes in for us is this: if we can offer navigation quality more of the time, to a high accuracy, and at an acceptable cost, then the sensor is a viable one to put into an autonomous vehicle.
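The way an IMU bridges those gaps in absolute localization is dead reckoning: integrating body-frame acceleration and yaw rate between fixes. A minimal 2D sketch under idealized, bias-free measurements (all names and values are illustrative, not an OxTS algorithm):

```python
import math

def dead_reckon(pose, accel_body, gyro_z, dt):
    """Propagate a 2D pose (x, y, heading, vx, vy) one time step
    using body-frame acceleration (m/s^2) and yaw rate (rad/s).
    Bridges the gap until the next absolute (e.g., GNSS) fix."""
    x, y, th, vx, vy = pose
    # rotate body-frame acceleration into the world frame
    ax = accel_body[0] * math.cos(th) - accel_body[1] * math.sin(th)
    ay = accel_body[0] * math.sin(th) + accel_body[1] * math.cos(th)
    return (x + vx * dt,
            y + vy * dt,
            th + gyro_z * dt,
            vx + ax * dt,
            vy + ay * dt)

# constant 1 m/s eastward, no rotation: after 10 steps of 0.1 s, x advances 1 m
pose = (0.0, 0.0, 0.0, 1.0, 0.0)
for _ in range(10):
    pose = dead_reckon(pose, (0.0, 0.0), 0.0, 0.1)
```

In practice, unmodeled IMU biases make this estimate drift, which is why the text stresses offering navigation quality "more of the time" rather than indefinitely.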

In autonomy, our active and potential customers are looking to do everything for a very, very low cost base, because they know that they’re trying to reach consumers with these products rather than businesses. So, their value box is entirely within the algorithms that they’re selling. They’re trying to offer scalable solutions that could roll out to thousands or millions of vehicles around the world, with their algorithms at the center of them. That localization and perception stuff is where you see companies such as Nvidia getting involved, because they want to be at the heart of it. Then they say that they can support any sensor while not being tied to any one of them. However, their algorithm is always going to be there at the heart of it. They will have GNSS receivers they support, they will have IMUs, they will have cameras, lidar, and radar and all the other kinds of possible aiding sensors. But they will say that their algorithm will still function if you have any number of those being fed in at any time.

So, autonomy relates to automotive in a sense, because you have autonomous passenger vehicles, but you also have autonomous heavy industry and autonomous survey, where people are flying drones autonomously or operating quadruped robots such as Spot. That can still be a survey application where you don't want a human in the loop but still need to navigate precisely. Someone may send a Spot robot into a deactivated nuclear reactor where they don't want to send a human, but they still need to get to a very specific point within that power station and report back. They need to avoid obstructions, georeference the data they collect, take a reading from a specific object or sensor inside, and come back out safely. So, accurate navigation throughout the whole process is very important.

I understand the role of OxTS in testing and development. However, are any of your systems going to be in any production vehicles?

Many of the companies that are working on autonomous passenger vehicles are realizing that they are still a long, long way away.

What about your presence in the auto market more broadly?

They are used, but as separate components. You will have GNSS, IMU, radar, cameras, and lidar but the localization and perception will all be done by the OEM or by a tier one supplier to the OEM. So, they don’t want a third-party solution that is giving them a guarantee of their position because it’s a black box. They need to have traceability and complete insight as to what each sensor is saying so that they can build in redundancy and bring the vehicle safely to a stop if one of those systems is reporting poor data. For production vehicles, we are very much used as a validation tool in the development stage, but in terms of producing the production vehicle, they need to have that visibility of the inner workings of the system. Most INSs will not give you that insight as to how they arrived at their navigation output, because that is proprietary information. As a result, many automotive customers are looking to do that themselves. However, as I said, they’re realizing that it’s very difficult, and they’re quite a long way from navigating anywhere.

Therefore, currently no OxTS products are in production vehicles.

Not for passenger autonomy. However, they are used in some of the other autonomous spaces, such as heavy industry, which operates in private, controlled environments such as mines, quarries, and ports, where there is little interaction with the public. That is not only because the vehicle price point is much higher for mining and heavy-industry vehicles, but also because your algorithms and perception capability do not have to deal with vehicles that are not autonomous, or that are driven by drivers not trained on health and safety in the area.

In these private spaces, you can tune your systems to work with each other without having to worry about the pedestrians and the random vehicles for which you’ve not accounted in your perception algorithms. That’s where the divide comes at the moment. If there are untrained people in the area, then there’s a lot more to accommodate and that makes the proposition much more difficult.

Are you at liberty to discuss any recent end user success story with your products?

The Ordnance Survey in the UK has been using our INS to create 3D maps, on which they can then use semantic segmentation to classify features within the environment and pull out all the relevant features within a survey of a city, for example. They're blending raw data from the OxTS unit, lidar, and the map data they already have to create high-accuracy 3D maps, adding a third dimension to the high-accuracy 2D maps that have been their value proposition for the past few decades. They can say, "here are all the trees in the environment," or all the traffic signs or buildings, the kinds of features you see in Google Earth imagery. They start to reach the realm of high-accuracy map data, which they're looking to sell to commercial entities to monetize, first on a nationwide level and then on a global level.
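The core georeferencing step described here is a rigid transform: each lidar return, measured in the vehicle frame, is rotated and translated into the map frame using the INS pose. A minimal yaw-only sketch (a full system uses roll and pitch too; names and values are illustrative):

```python
import math

def georeference(point_sensor, ins_position, heading_rad):
    """Transform a lidar return from the vehicle frame into a local
    map frame using the INS position (x, y, z in meters) and heading
    (yaw only, for brevity)."""
    px, py, pz = point_sensor
    c, s = math.cos(heading_rad), math.sin(heading_rad)
    return (ins_position[0] + c * px - s * py,
            ins_position[1] + s * px + c * py,
            ins_position[2] + pz)

# a return 10 m ahead of a vehicle at (100, 200, 0) heading +90 degrees
pt = georeference((10.0, 0.0, 0.0), (100.0, 200.0, 0.0), math.pi / 2)
```

Because every point inherits the INS pose error, centimeter-level position and accurate heading are what make the resulting 3D map usable as a commercial product.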

If you have that map data, there's a lot you can do with it, in terms of intelligent decision-making about routing a vehicle, or many other things, such as monitoring the heat output of buildings. In the EU, there are many directives around such things as carbon emissions. If you're being more efficient with the heat output of your buildings, you can effectively show that you're hitting your CO2 emissions reduction goals by running an initiative to insulate buildings better, and so on. It always starts with, "Where was I when I saw this object or this building?" Then you can georeference that building, color it by thermal imaging, and so on.

They can start to produce 3D imagery that is colored by thermal output, or by any number of other sensors, which gives them metadata that they can sell to someone else. It makes what was previously a very big job very efficient: they can drive hundreds of kilometers in a day, where previously it was a static survey done over the course of weeks on foot. It is also changing the efficiency metric that they can deliver to their end users.

Thank you very much!

<p>The post PNT by Other Means: Oxford Technical Solutions first appeared on GPS World.</p>

Online Exclusive: PNT by Other Means
https://www.gpsworld.com/forrefcomppnt/
Wed, 05 Jul 2023

Advanced industrial societies are increasingly reliant on the fantastic capabilities of GNSS and, therefore, increasingly vulnerable to their weaknesses.

<p>The post Online Exclusive: PNT by Other Means first appeared on GPS World.</p>

Image: Safran Federal Systems

Due to the limited space available in print, I was able to use only a small portion of the interviews I conducted for our July cover story. For full transcripts (totaling more than 12,000 words), see below:

  • Safran Federal Systems (formerly Orolia Defense & Security) makes the VersaPNT, which fuses every available PNT source — including GNSS, inertial, and vision-based sensors and odometry. I spoke with Garrett Payne, Navigation Engineer.
  • Xona Space Systems is developing a PNT constellation consisting of 300 low-Earth orbit (LEO) satellites. It expects its service, called PULSAR, to provide all the services that legacy GNSS provide and more. I spoke with Jaime Jaramillo, Director of Commercial Services.
  • Spirent Federal Systems and Spirent Communications are helping Xona develop its system by providing simulation and testing. I spoke with Paul Crampton, Senior Solutions Architect, Spirent Federal Systems as well as Jan Ackermann, Director, Product Line Management and Adam Price, Vice President – PNT Simulation at Spirent Communications.
  • Oxford Technical Solutions develops navigation using inertial systems. I spoke with Paris Austin, Head of Product – New Technology.
  • Satelles has developed Satellite Time and Location (STL), a PNT system that piggybacks on the Iridium low-Earth orbit (LEO) satellites. It can be used as a standalone solution where GNSS signals will not reach, such as indoors, or are otherwise unavailable. I spoke with Dr. Michael O’Connor, CEO.
  • Locata has developed an alternative PNT (A-PNT) system that is completely independent from GNSS and is based on a network of local ground‐based transmitters called LocataLites. I spoke with Nunzio Gambale, founder, chairman, and CEO.
