Software-defined radio: To infinity and beyond

History

October 17, 2016

It’s hard to believe that the term “software-defined radio” (SDR) has been around for about 30 years. That’s a long time in the tech world, yet SDR is still a common topic of discussion and carries more than its share of misconceptions. The Wireless Innovation Forum (formerly the SDR Forum) defines SDR as “a radio in which some or all physical layer functions are software defined.” The term refers to the physical (PHY) layer processing of the waveform and, contrary to a common misconception, is not related to the radio frequency (RF) front end. Radios with a wideband tunable RF front end and dynamic spectrum access capabilities are called cognitive radios (CRs): radios whose communication systems are aware of their internal state and environment, such as location and spectrum usage at that location.
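To make the definition concrete, here is a minimal sketch (illustrative only, not taken from any particular radio) of a PHY-layer function, BPSK modulation and demodulation, implemented entirely in software. This is the essence of the SDR definition quoted above: the waveform processing lives in code, not in fixed hardware.

```python
import numpy as np

# Illustrative sketch only: one PHY-layer function (BPSK modulation
# and demodulation) implemented purely in software. All names and
# parameters here are assumptions for the example.

SAMPLES_PER_SYMBOL = 8  # arbitrary oversampling choice for this sketch

def bpsk_modulate(bits):
    """Map bits {0,1} to BPSK symbols {+1,-1}, then upsample to a
    complex baseband waveform that could feed a DAC/RF front end."""
    symbols = 1.0 - 2.0 * np.asarray(bits, dtype=float)  # 0 -> +1, 1 -> -1
    return np.repeat(symbols, SAMPLES_PER_SYMBOL).astype(np.complex64)

def bpsk_demodulate(waveform):
    """Average over each symbol period and slice back to bits."""
    symbols = waveform.real.reshape(-1, SAMPLES_PER_SYMBOL).mean(axis=1)
    return (symbols < 0).astype(int).tolist()

bits = [1, 0, 1, 1, 0]
assert bpsk_demodulate(bpsk_modulate(bits)) == bits
```

Swapping this function for, say, a QPSK or OFDM implementation changes the waveform without touching the hardware, which is exactly the flexibility the definition captures.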

After so many years, SDR is now so dominant in industry, as the standard implementation for everything from military tactical radios to cellular phones, that it is almost a given that a radio is an SDR. Innovations in semiconductor and software technologies will continue to improve development productivity and make SDRs more cost-effective, so there really is no end in sight for SDR. These factors mean that SDR is indeed a solved problem, and radios are now becoming frequency-flexible and evolving toward CR.

A demonstration that SDR is the de facto industry standard is shown in Figure 1. Closest to the center, the dark blue section is representative of the first set of markets that moved from hardware radio architectures to SDR architectures, whether they used the term SDR or not. These markets include signals intelligence (SIGINT), electronic warfare, test and measurement, public safety communications, spectrum surveillance, and military communications (MILCOM). Some of these markets used hard-wired application-specific integrated circuits (ASICs), while some already used programmable digital signal processors (DSPs).

The technology drivers behind the move to SDR in these markets were the advent of RF integrated circuits (RFICs) from companies like Analog Devices and of cost-effective, DSP-intensive field-programmable gate arrays (FPGAs) from companies like Xilinx. These two technologies combined to meet the multibillion-dollar needs of the military tactical radio market, creating something of a “market wave” in which that market had a huge impact on the development of SDR technology far beyond MILCOM. The JTRS [Joint Tactical Radio System] program funded the development and production of both SDR and CR technology for military radios, which created a strong ecosystem of suppliers, including semiconductor, instrumentation, and software companies. In terms of tools, SDR requires waveforms to be as portable as possible between different hardware platforms, which led to tools like the SCA [Software Communications Architecture] Core Framework, as well as better programming tools from electronic design automation (EDA) and semiconductor companies.

Figure 1: How successive generations of SDRs have dominated the radio industry and will continue to do so.







Advances in RFICs, FPGAs, and EDA tools were significant factors in enabling the second generation of SDRs, driven by 4G LTE infrastructure. Virtually all LTE eNBs (eNodeBs, or base stations) are developed with RFICs and FPGAs. Some of the larger infrastructure vendors eventually move to ASICs, but even then, baseband ASICs are largely programmable: they use processors connected to hardened blocks called hardware accelerators for compute-intensive functions such as turbo decoding, which typically exceed the performance or power budgets of processors alone.

The next market wave, shown as the third generation, occurred when 4G LTE phones subsequently moved to SDR architectures. This change was enabled by low-power, high-performance DSP cores optimized for phones, offered by companies such as Ceva, Tensilica, and Qualcomm. Like the baseband ASICs for infrastructure, these cores are integrated into application-specific standard products (ASSPs) or ASICs for much of the PHY processing, coupled with hardware accelerators. Once this shift occurred, SDRs increased in volume and reach by orders of magnitude to become the de facto industry standard for radios.

The obvious question: What’s next for SDR and CR? As much as 4G phone volume drove SDR, the prospects of 5G, the IoT (Internet of Things), and sensor networks promise to increase SDR volume by another order of magnitude. What will be the technology engine driving SDR to these lofty heights? Given that previous engines were innovations in analog and digital technology, it follows that the next engine will be the combination of analog and digital on one monolithic chip to reduce cost and SWaP [size, weight, and power]. For infrastructure, this driver could be FPGAs with integrated analog-to-digital converters (ADCs) and digital-to-analog converters (DACs). For handsets and sensors, it could be application processors, also with integrated ADCs and DACs. Don’t forget software and tools, which are the whole point of SDR, after all. To enable the development of these chips, as well as the waveforms and application software running on them, there will be a requirement for better system-level tools that can design and debug across the analog and digital domains, as well as program heterogeneous processors on a single chip, including general-purpose processors (GPPs), DSPs, graphics processing units (GPUs), and/or FPGA fabric.

With all this talk about the evolution of SDRs, it’s interesting to note that technology becoming more cost-effective has been a major driver in the adoption of SDR technology, enabling SDR to reach previously inaccessible markets such as handsets. This trend is not expected to go away, as high-volume markets are generally very price-sensitive.

Ettus Research, a National Instruments company, offers a super-heterodyne, two-channel receiver daughtercard (Figure 2) called TwinRX. All previous Ettus Research RF daughtercards used direct-conversion architectures, which demodulate an RF carrier directly to baseband. The RFICs in Figure 1 that were a key technology driver for SDR also used direct conversion; by eliminating the IF (intermediate frequency) stage, direct-conversion receivers can be smaller and lower-cost. That benefit usually comes at a penalty in RF performance, however, including nonlinearity and poorer dynamic range. For this reason, super-heterodyne architectures are still common in SIGINT and direction finding (DF), where an increased ability to detect, monitor, and capture a signal of interest is critical.
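The trade-off between the two receiver architectures can be sketched numerically. The toy example below (all frequencies, names, and filter choices are illustrative assumptions, not Ettus hardware parameters) mixes the same RF signal to baseband two ways: direct conversion uses one complex mix straight from RF to baseband, while the super-heterodyne path mixes first to an IF and then to baseband. Both recover the same message after low-pass filtering; the superhet’s extra IF stage is where its additional filtering and dynamic range come from in real hardware.

```python
import numpy as np

# Toy numeric sketch (all values are illustrative assumptions, not
# Ettus hardware parameters) of the two receiver architectures.

fs = 1e6                      # sample rate, Hz
f_rf, f_if = 200e3, 50e3      # toy "RF" carrier and intermediate frequency
t = np.arange(4096) / fs
message = np.cos(2 * np.pi * 1e3 * t)          # 1 kHz test message
rf = message * np.cos(2 * np.pi * f_rf * t)    # message on the RF carrier

def lowpass(x, taps=64):
    """Crude boxcar low-pass filter, good enough for this sketch."""
    return np.convolve(x, np.ones(taps) / taps, mode="same")

# Direct conversion: one complex mix straight from RF to baseband.
direct_bb = lowpass((rf * np.exp(-2j * np.pi * f_rf * t)).real)

# Super-heterodyne: mix RF down to the IF first, then IF to baseband.
# (A real receiver would band-pass filter at the IF; here the final
# low-pass removes the unwanted mixing products.)
if_stage = rf * np.cos(2 * np.pi * (f_rf - f_if) * t)
superhet_bb = lowpass((if_stage * np.exp(-2j * np.pi * f_if * t)).real)

# Both paths recover the same message, up to a gain factor.
trim = slice(100, -100)  # ignore filter edge effects
for bb in (direct_bb, superhet_bb):
    corr = np.corrcoef(bb[trim], message[trim])[0, 1]
    assert corr > 0.95
```

In hardware, the difference that this sketch glosses over is exactly the one the article highlights: the IF stage lets the superhet apply sharp, fixed-frequency filtering before final conversion, improving dynamic range at the cost of size.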

Figure 2: Two super-heterodyne TwinRX daughtercards inside an Ettus USRP X310 SDR for four phase-aligned RX channels.







Manuel Uhm is the director of marketing at Ettus Research, a National Instruments company. Manuel has business responsibility for the Ettus USRP, NI USRP, and BEEcube portfolios. Manuel is also the chair of the Board of Directors of the Wireless Innovation Forum (formerly the SDR Forum). He has served on the Board since 2003 in various technical, marketing, and financial roles. Manuel can be reached at [email protected].

Ettus Research, a National Instruments company · www.ettus.com
