Archive for the ‘Technology News’ Category

Abstract: Wireless device range can be the pivotal, make-or-break characteristic of a successful end product. This paper will dig into the mystery and explore the mechanisms by which wireless range can be reduced or optimized through RF and antenna design. The discussion is relevant to board- and system-level circuit and antenna design. The useful rule of thumb that every 1 dB of additional RF loss reduces wireless range by 10% is presented.

Index Terms— Wi-Fi, Bluetooth, BLE, Zigbee, RFID, GSM, GPS, MBAN, HBAN, UWB, CDMA, Chip Antenna, Circuit Board Antenna, Wireless Range Reduction, Wireless Range Optimization, Radio Module, 802.11 and  802.15.4


Any RF engineer who has optimized RF or microwave system hardware in the lab will agree that squeezing out the last 1 or 2 dB from a design can be the most challenging aspect. After reading this paper you may better appreciate the value of such rigor. This is where the rubber meets the road for applying the art and science of RF design to the development of wireless products. At this point the product requirements may be defined, the theoretical path loss calculations may be complete, and you want to ensure execution of the hardware development goes smoothly. Or, the product may be designed and prototypes delivered and debugged, but questions are being asked regarding the wireless range, or lack thereof. This article will help the reader understand quantitatively how much wireless range may be lost if the antenna tuning and match steps are neglected, if there is more RF loss in the design than anticipated, or if a related aspect of the design is out of control.

Unintended Loss in the Design

There are many possible sources of insertion loss, mismatch loss and general degradation of antenna gain.  These are RF signal losses resulting from product design decisions and features.  Collectively we will refer to these as unintended losses and all can have identical impact, which is to reduce the range of wireless products. By referring to them as unintended losses, we mean that they are a consequence of poor RF layout or antenna design and were not factored into the link budget calculation, which can be used early in the design to predict the range of a wireless device.

The RF engineer can prevent these problems and their disastrous consequences by optimizing the performance critical aspects of the design before the prototypes are built, and continuing the optimization and performance assessment in the lab when the hardware is available. This is not a long and drawn out process. It is a matter of simply involving the right expertise with access to the proper design, simulation, and test and measurement tools at the right times.  The end result will be a product which provides the best possible wireless performance for your customers and shareholders, with predictable cost and schedule.

Common Sources of Unintended Loss

The contributions from all sources of unintended loss are cumulative, including the separate losses of each of the 2 radios participating in a wireless link. For example, if we have 2dB of mismatch loss and the antenna gain is degraded by 2dB due to the layout, the impact of 4dB must be considered.  If two such identical radios are communicating, then the total impact of 8dB must be considered.

Antenna Match

Antenna match refers to optimizing the impedance matching network, classically located close to the antenna, using a piece of test equipment called an RF vector network analyzer. The impedance matching network is typically composed of lumped-element capacitors and/or inductors, whose values must be chosen, or transmission line stubs, which must be trimmed. Once the impedance matching network is tuned based on precision laboratory measurement, subsequent product may be built using the values determined. The purpose of matching the antenna is to force it to resonate over the appropriate range of frequencies for the radio, and to couple as much energy as possible between the 50 ohm antenna and the transmit/receive circuitry.
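To put a number on why matching matters, the mismatch loss for an imperfectly matched antenna can be computed from its VSWR. This is a minimal sketch; the helper name `mismatch_loss_db` and the sample VSWR values are illustrative, not taken from the article.

```python
import math

def mismatch_loss_db(vswr):
    """Mismatch loss (dB) for a given VSWR, assuming a lossless mismatch.

    The reflection coefficient magnitude is Gamma = (VSWR - 1)/(VSWR + 1),
    and the power actually delivered is reduced by the factor (1 - Gamma^2).
    """
    gamma = (vswr - 1.0) / (vswr + 1.0)
    return -10.0 * math.log10(1.0 - gamma ** 2)

# A 2:1 VSWR (a common antenna spec limit) costs about 0.5 dB;
# a badly detuned 6:1 antenna costs about 3.1 dB per radio.
for vswr in (1.5, 2.0, 3.0, 6.0):
    print(f"VSWR {vswr}:1 -> mismatch loss {mismatch_loss_db(vswr):.2f} dB")
```

Note that a 2:1 VSWR costs only about half a dB, which is why it is a common acceptance limit, while a badly detuned antenna can quietly burn several dB at each end of the link.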

Circuit Board Layout

If the antenna is mounted on or integrated into a circuit board, careful attention must be given to the layout, and the Gerber files must be reviewed. Often the antenna used is really only half of the antenna capability, since the circuit board RF ground plane plays a key role in the antenna performance. Without the presence of the ground plane and proper control and checking of all the geometric positioning of the antenna and the matching and feed network, the design may be destined to provide poor wireless performance before it is fabricated. The board layout team must be given detailed guidance and instruction, including the positioning of vias critical to RF performance. Simulation tools, as well as theoretical knowledge of how signals behave on circuit boards, are needed to get this part of the design right.

Integration of Antenna into Operating Environment

Your end product may use more than one circuit board or contain other large conductive objects such as shielded LAN or USB connectors, transformers, or discrete wires and cables. All of these can profoundly impact the performance of your antenna, as can proximity to materials such as plastics and conductors. The typical use case should be evaluated, including accessories. Proximity to the human body must be considered if the device is handheld or body worn. Integration of the antenna into the product enclosure refers to evaluating the entire product design with respect to the antenna(s) and retuning the impedance matching network in the final assembled product, since everything mentioned above can impact antenna performance. The tuning of the board used for laboratory development is often different from the final product tuning!

Quantify Impact of Loss on Wireless Range

Free Space Path Loss

Once prototype hardware is built and the wireless link is functioning in the lab, the easiest part of the link budget to modify is often the physical separation between the two radios. Technically, we are changing the free space path loss (FSPL). The FSPL gets smaller (less loss) when the radios are moved closer together, and vice versa. Here is a handy version of the equation for FSPL:

Equation 1:

FSPL (dB) = 20 log10(d) + 20 log10(f) - 147.55

The distance between the two radios is d (meters), and the frequency of interest is f (Hz).

If we plot path loss vs. separation distance d, the slope of the line is 20 dB/decade, or 6 dB/octave, for any range of separation distance. Figure 1 shows the path loss in dB for 3 commonly encountered frequencies over a single decade of distance d, from 100 to 1000 meters.


Figure 1 • Path Loss Over 1 Decade of Distance.
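The figures in this plot can be reproduced with a short script. This sketch implements Equation 1 in the form FSPL (dB) = 20 log10(d) + 20 log10(f) - 147.55, with d in meters and f in Hz; the function name and sample frequency are illustrative.

```python
import math

def fspl_db(d_m, f_hz):
    """Free space path loss in dB; d_m in meters, f_hz in Hz (Equation 1)."""
    return 20.0 * math.log10(d_m) + 20.0 * math.log10(f_hz) - 147.55

# The slope is 20 dB/decade of distance: going from 100 m to 1000 m
# adds exactly 20 dB of path loss at any frequency.
f = 2.4e9  # 2.4 GHz ISM band, used by Wi-Fi, Bluetooth, and Zigbee
print(f"FSPL at  100 m: {fspl_db(100, f):.1f} dB")   # ~80 dB
print(f"FSPL at 1000 m: {fspl_db(1000, f):.1f} dB")  # ~100 dB
```

The constant 147.55 comes from 20 log10(4*pi/c) with c in meters per second, which is what lets d and f be entered directly in meters and Hz.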

Loss Compensation by Range Reduction

If the RF design has unintended loss not accounted for in the link budget, without changing any other variable, we can move the two radios closer together (reduce separation distance d) until they can maintain a wireless radio link. The effect of moving the radios closer together is to compensate for unanticipated loss by reducing the free space path loss defined earlier with an equation. Through inspection of the graph or mathematical analysis of the equation, we determine an approximate rule of thumb that regardless of the source of the loss or separation distance,

Every 1dB of unanticipated loss

Reduces wireless range by 10%!

We are making a linear approximation to quantities plotted on logarithmic scales, and this approximation is reasonably accurate for the final 5 dB of link budget power while investigating the maximum separation distance. For example, if you expected 300 meter range but your antenna gain is 2 dB low, the 2 dB translates into an approximate 20% loss of wireless range, so you measure a range of (300 meters)*(80%) = 240 meters. This is a range reduction of 60 meters. If the range is 50% of what you expected, you are compensating for almost exactly 6 dB of unintended loss.
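The exact form of the rule of thumb follows directly from Equation 1: since path loss rises 20 dB per decade of distance, range scales as 10^(-loss/20). This short sketch (function name illustrative) compares the exact scaling with the 10%-per-dB approximation from the example above.

```python
def range_after_loss(expected_range_m, loss_db):
    """Exact remaining range after loss_db of unanticipated loss.

    From the 20 dB/decade slope of free space path loss, range scales
    as 10**(-loss_db/20): every 6 dB halves the range. The "1 dB costs
    10%" rule is the small-loss approximation of this curve
    (10**(-1/20) = 0.891, i.e. about an 11% reduction per dB).
    """
    return expected_range_m * 10 ** (-loss_db / 20.0)

# The article's example: 300 m expected, antenna gain 2 dB low.
print(range_after_loss(300, 2))  # ~238 m, close to the 240 m rule of thumb
print(range_after_loss(300, 6))  # ~150 m: 6 dB costs half your range
```

The rule of thumb slightly understates the damage for large losses, but for the few-dB errors typical of a neglected antenna match it is accurate enough for back-of-the-envelope work.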

Other Loss Compensation Techniques

Standard coping mechanisms include turning up the transmitter power to compensate for an underperforming RF design. This may appear to work well in the lab; however, as we increase transmit power, we also increase the amplitude of spurious emissions and harmonics, which often leads to failure when the FCC or ETSI compliance tests are performed. This is similar to stepping on the gas when you have a flat tire. You may move forward for a while, but you will get emissions you weren't counting on, such as your tire flying apart. If you do not have timely access to RF and antenna engineering capabilities when you need them, Peak Gain Wireless is ready to help with the expertise and equipment to solve these types of problems the right way. We can prevent these problems if we are involved early in the design, or define and solve the problem if hardware is already complete.


What does this all mean? Many factors impact the wireless link budget. Examples include antenna selection, design, impedance matching and final product integration. If an antenna has not been properly designed, tuned and optimized in the final product enclosure, it is not uncommon to have a total unintended loss of 2 to 6dB. Since the impact is 10% range reduction per dB loss, this translates into a 20% to 50% range reduction. These types of problems can often be predicted, understood and designed out through EM simulation or the knowledge and insight of an experienced RF engineer with access to the right tools.

About the Author:

Matthew Meiller is President and Principal RF Engineer at Peak Gain Wireless, LLC, which provides wireless product design and development services. He has over 20 years of experience in industry. His team has expertise with LoRa, Bluetooth, Bluetooth Low Energy (BLE), WiFi, Zigbee, wireless sensors and other sub-6 GHz ISM radios. Many of Peak Gain Wireless's designs are low power RF running on disposable batteries or coin cells for app-enabled connectivity. Services include system specification development, hardware, firmware, antenna design, assembly, and test and measurement. Peak Gain offers both full turnkey design services, or we can help your team succeed with the high risk RF parts of the design such as antenna design, tune and final integration into the end product. We support single and multiband antenna design and development including antennas for cellular M2M products.

Published in High Frequency Electronics



Mildred Dresselhaus. Electronics made from nanoscale tubes, wires, and sheets of carbon are coming, thanks to this pioneering researcher.

Photo: Mike McGregor

Before silicon got its own valley, this mild-mannered element had to vanquish many other contenders to prove itself the premier semiconductor technology. It did so in the 1950s and 1960s. Today, carbon is poised at a similar crossroads, with carbon-based technologies on the verge of transforming computing and boosting battery-storage capacities. Already, researchers have used these technologies to demonstrate paper-thin batteries, unbreakable touch screens, and terabit-speed wireless communications. And on the farther horizon they envision such carbon-enabled wonders as space elevators, filters that can make seawater drinkable, bionic organs, and transplantable neurons.

Whatever miracles emerge from Carbon Valley, its carbon-tech titans will surely think fondly upon their field’s founding mother, Mildred Dresselhaus. This MIT professor of physics and engineering has, since the early 1960s, been laying the groundwork for networks of nanometer-scale carbon sheets, lattices, wires, and switches. Future engineers will turn these things, fabricated from carbon-based materials such as graphene, into the systems that will carry computing into its next era.

Now, after a half century of quiet work, she is accumulating accolades. This past November, in a ceremony at the White House, President Obama awarded her the Presidential Medal of Freedom, the U.S. government’s highest civilian honor. “Her influence is all around us, in the cars we drive, the energy we generate, the electronic devices that power our lives,” Obama said.

And this June, the IEEE will confer upon Dresselhaus its highest accolade, the IEEE Medal of Honor, for her “leadership and contributions across many fields of science and engineering.” She is the first female Medal of Honor recipient in the award’s nearly century-long history. (Before the IEEE’s formation, the Medal of Honor was presented by the Institute of Radio Engineers, which merged with the American Institute of Electrical Engineers in 1963 to form the IEEE.)

While Dresselhaus has blazed a path for researchers eager to exploit the magic of carbon computing, for most of her 84 years her own pathway has been anything but obvious. It was muddled by a world that had trouble accommodating a visionary engineering researcher who was also a caring and thoughtful mentor—as well as a mother of four (and today a grandmother of five).

The daughter of destitute Eastern European émigrés, a product of Great Depression and World War II–era New York City schools and their melting-pot culture, Dresselhaus (née Spiewak) as a child imagined that the only career open to her was that of schoolteacher. Even that was a bit of a stretch, given the time and place: The kids in her neighborhood and in her struggling primary school in the Bronx were mostly uninterested in their studies. But a mysterious force soon intervened. It was music.

Both her grandfather and great-grandfather served as town cantors in her father’s ancestral village of Dzialoszyce, Poland. So when her older brother, Irving, began playing the violin with uncommon grace at age 4, his gift wasn’t a complete surprise. Their parents secured a scholarship for him at New York City’s prestigious Greenwich House music school. And when Mildred was herself 4 or 5, she began studying music there, too. Although she stopped taking lessons at Greenwich House at 13, she has never abandoned her beloved violin. Dresselhaus still plays every day. “I had his hand-me-down violin,” she says. “I inherited all the things he left behind.”

And it was music that brought her into contact with more ambitious peers at the Greenwich House school. “It was obvious—education was important,” she says she realized not long after arriving at the school, in 1934 or ’35. “That was the most important lifelong thing I learned at the music school.”

She would probably have again followed her brother’s footsteps several years later, into the legendary Bronx High School of Science, but in those days Bronx Science was for boys only. So she set her sights on Hunter College High School, a New York City preparatory school for girls. While studying for her entrance exam, she discovered to her delight how easily math came to her. “My interest was inspired by studying—by myself and motivated by myself—math for the entrance exam to Hunter High,” she says.

At Hunter, she did so well in math and science that a poem in Dresselhaus’s senior yearbook pays tribute to her abilities: “Any equation she can solve / Every problem she can resolve / Mildred equals brains plus fun / In math and science, she’s second to none.” She went on to study at Hunter College, where, during her second year, another important force entered into her life.

“Rosalyn Yalow’s [physics] course got me more into focusing on the science profession,” Dresselhaus says of the course she loved most at Hunter, which was taught by a medical physicist who would soon herself decamp for a research career and ultimately share the 1977 Nobel Prize in Physiology or Medicine. “That’s where I really got started. And Rosalyn insisted that I go to graduate school. She was a person who used to tell you what you were doing.”

Bolstered by Yalow’s effusive letters of recommendation, Dresselhaus was admitted to Radcliffe College in 1951 for graduate studies, an admission deferred so that she could attend the University of Cambridge on a Fulbright fellowship.

“Radcliffe had no [science] classes,” Dresselhaus explains. “The classes were at Harvard. But the exams were at Radcliffe. Women didn’t take their exams with the men. I had to take my exams by myself in a different room. It was a very complex situation and not a very comfortable one.”

During her first year at Harvard, Dresselhaus realized she was growing weary of the university and a bit restless. She’d discovered that the best place in the country to study physics was at the University of Chicago, home to Manhattan Project veteran and Nobel laureate Enrico Fermi. So in 1953, after finishing her master’s degree at Radcliffe, she was off to Illinois.

At Chicago, too, Dresselhaus was often the only woman in her classes. But the learning environment wasn’t as stifling. And it was at Chicago, she says, where she first really began to learn to think like a physicist, thanks to Fermi himself. Although by then famous for his role in the Manhattan Project, Fermi headed up a small and intimate physics department. In Dresselhaus’s incoming class in 1953, for instance, there were just 11 physics students.

Fermi was an early riser, as was Dresselhaus, and they lived along the same walking route to campus. So she, along with other students, faculty, and acolytes, timed their morning commute so they could stroll along with the legendary physicist.

“He was a methodical guy; he always did the same thing every day,” Dresselhaus says. On the morning walks, for example, Fermi would talk about the issues on his mind—sometimes related to the day’s lecture, sometimes not. And when Fermi gave his talks, he’d first hand the class copies of his notes. “He didn’t want people taking notes while he [lectured]. He wanted people to listen. He’d give you the notes. The lecture [notes] didn’t have many pages. Very concise.”

Fermi, who died in November 1954, during Dresselhaus’s second year at Chicago, still had an outsized influence on the young woman during her brief time in his orbit. “He developed in me the mind-set that we should be interested in everything,” she says, “because we never know where the next big breakthrough in science will occur.”

In the fall of 1955, Dresselhaus began her Ph.D. project, investigating the microwave properties of a superconductor in a magnetic field. The novel and hybrid nature of her investigation—involving low-temperature and solid-state physics, electrical engineering, and materials science—meant she couldn’t just order the parts for her research out of a catalog.

She found much of what she needed, though, under the university’s football stands, where more than a dozen years before, Fermi had led a group that had created the world’s first man-made nuclear-fission chain reaction. There, a mountain of surplus equipment was free for the taking. Repurposing a warehouse worth of materials, she grew superconducting wire for her experiments, built microwave equipment, and even produced liquid helium.

Dresselhaus says she’d developed that kind of gumption because her primary school teachers were terrible. “They were sufficiently bad that if you wanted to learn something, you taught yourself,” she says. “That was terrific training.”

While at Chicago, she met her future husband, fellow graduate student Gene Dresselhaus. They married in May 1958 and moved to Ithaca, N.Y., where she was a National Science Foundation postdoctoral fellow and Gene had an entry-level faculty position in the physics department at Cornell University. There Dresselhaus also met another celebrity scientist, albeit one whose great fame would come years later—Richard Feynman. At the time, Feynman was developing the equations that would become the quantum theory of electrodynamics.

“He gave a lecture now and then,” she says. “And if there’s a Feynman lecture, you go to it. It’s always interesting, looking at things you’ve heard about before but from a totally different perspective.”

Also in 1959, the Dresselhauses welcomed their first child, Marianne. And despite the stimulating Feynman lectures Dresselhaus occasionally attended, Cornell wasn’t exactly a female academic’s dream in those days. Early on, a faculty member told her point blank that no woman would ever be permitted to lecture to his engineering students.

Photo: Mike McGregor

So in 1960 the two Dresselhauses went to MIT’s Lincoln Laboratory. There she moved out of superconductors, her thesis topic, and began looking instead at magnetic and optical properties of graphite, bismuth, and other so-called semimetals. This field, she says, wasn’t popular or very competitive at the time, which gave her the latitude she needed to have four children (one daughter and three sons) through 1964. As a working mother, however, she encountered some bumps in her career progress.

One Lincoln Lab colleague, H. Eugene Stanley (now a professor of physics, chemistry, biomedical engineering, and physiology at Boston University), recalls the day after Dresselhaus delivered her youngest child, Eliot, in 1964.

“When she had her fourth kid,” recalls Stanley, “she brought him to work the day after he was born. She was there around noon or 1 o’clock with the baby in tow. But because Lincoln Lab was a government lab, you either had to have clearance or have a badge. They wouldn’t let the kid in. She was furious! I didn’t see her angry that often, but I saw her angry that day.”

Dresselhaus crossed from Lincoln Lab to parent institution MIT in 1967, accepting a visiting professorship in electrical engineering, a position that became permanent the following year. She added a joint appointment in physics in 1983.

“When I first came to MIT, the [physics] department was only interested in high-energy physics,” she says of a field that was then consumed with colliding subatomic particles at ever-higher energies. She adds that more quotidian fields of physics, from materials science to engineering physics, were on the back burner at the time. “It’s all totally different now.… There’s a big shortage of people [who] have a physics background and engineering also.”

On a snowy day in the middle of one of Cambridge’s harshest winters ever, Dresselhaus holds forth in her MIT office on her favorite topic. “Consider a simple sheet of carbon atoms, also known as graphite,” she begins.

She pulls down a well-worn ball-and-stick model from atop one of her cabinets. “Carbon’s crystal structure is such that the in-plane force is the strongest in nature,” she says. “But across the plane it’s very weak. So it allows separation of layers very easily.”

A pencil’s graphite flakes off easily without disintegrating, and yet it can still cling to rough, fibrous surfaces like paper. Individual sheets of graphite, in other words, are as tough as diamond. But as a group they’re as flaky as phyllo dough.

Throughout the 1960s, 1970s, and 1980s, Dresselhaus and her graduate students investigated the properties of both graphite and carbon intercalation compounds—that is, sheets of graphite sandwiching individual bromine or potassium atoms, which were captured like olives between slices of bread. Her group also laid the foundation for the discovery and exploitation of nanotechnological wonder materials, such as the tiny carbon spheres known as buckminsterfullerenes, the cylindrical carbon pipes called nanotubes, and the single-atom-thick sheets of carbon called graphene.

Variations or combinations of these carbon structures could yield body armor stronger than Kevlar, ultrathin membranes with pores small enough to filter the salt from seawater, and even bionic implants that can give new hope to those with serious spinal-cord or organ damage. Used as electrodes in batteries or capacitors, graphene and nanotubes offer promise as a kind of ultimate energy storage system. Their charge capacities would exceed those of traditional batteries, and their charge times (on, say, an electric-vehicle battery) would be shorter than the time it takes to pump a tankful of gasoline.

And as a possible substrate for next-generation electronics, graphene has few competitors today. Its high conductivity (better than silver’s) and its single-atom thickness make robust, molecule-size graphene circuit components boasting terahertz computing speeds a tantalizing if far-future possibility. “Graphene is not going to replace silicon; it’s going to do different things,” Dresselhaus says.

Although well into her 80s, Dresselhaus is at her MIT office every day, including weekends and holidays, often as early as 6 a.m. Her enthusiasm for her work, which these days includes studying the optical, electric, and vibrational properties of graphene, carbon nanotubes, and other nanomaterials, seems undiminished. “I am excited by my present research and am not yet anxious to stop working,” she says, simply.

As she has for much of her MIT career, Dresselhaus also mentors young people, especially women starting careers in STEM. She has supervised the theses of more than 60 doctoral students and shepherded many more colleagues and associates through career transitions and inflection points.

“One time at MIT, she told me she was working with this great [Ph.D.] student named Shirley Ann Jackson,” says Laura Roth, Dresselhaus’s former colleague at Harvard and Lincoln Lab. “And now she’s president of Rensselaer Polytechnic Institute.” (Jackson has herself earned 52 honorary degrees and been called by Time magazine “perhaps the ultimate role model for women in science.”)

Says Gang Chen, head of the mechanical engineering department at MIT, “Four women from my own group…benefitted from Millie’s support during their stay at MIT. On several occasions, Millie volunteered to talk to my female students, giving them individual career advice.

“On one hand, it seems to be quite late for the first woman to receive the IEEE Medal of Honor,” Chen adds. “On the other hand, no one is more fitting than Millie, and she has set a truly high bar. I am sure Millie’s receiving this honor will inspire more women in IEEE to strike high.”

This article originally appeared in print as “The Queen of Carbon.”

About the Author

An IEEE Spectrum contributing editor, Mark Anderson has covered advances in carbon nanotechnology for us and other publications. In profiling the field’s doyenne, he found 84-year-old Mildred Dresselhaus’s seven-day-a-week work ethic a true inspiration. “I arrived at her MIT office on the morning of a snowstorm to do the interview,” he recalls, “and she was ready to go.”

By Eric Higham

Shortly after the conclusion of World War II, research began on a solid-state replacement for vacuum tube-based devices. The goal was to develop devices that would be more robust and more reliable than the tube devices in use at the time. Bell Labs started a group, led by William Shockley, to develop this solid-state alternative for amplification purposes. In the late 1940s, this group announced the invention of a point-contact transistor with gold contacts to a sliver of germanium (1). This was a start, but this device was also very fragile, relying on a spring to ensure contact of the gold probes to the germanium surface. Shockley was not satisfied with this solution and pushed on in his research. This work culminated in the theory of p-n junctions and minority carrier injection and what Shockley called the junction transistor (2). In 1951, Bell Labs fabricated a working germanium transistor (3).

The size, portability, performance and reliability of the germanium-based transistor fueled the growth of both military and commercial applications. Computers became one of the biggest early users of the germanium transistor, but the material had issues with temperature range and most notably, reverse leakage. The temperature range was a problem for military applications and the reverse leakage created serious issues for computer manufacturers. In 1954, the chemist who was instrumental in the germanium transistor fabrication process at Bell Labs announced that a team he was running at Texas Instruments had fabricated a silicon-based transistor (4).

DoD Funding

Silicon technology has become the preeminent high-volume process technology, but the US Department of Defense started funding efforts to develop the capabilities of III-V semiconductor technologies in the 1980s. These efforts started with the GaAs Pilot Line Program that sought to develop GaAs digital integrated circuits to compete with silicon (5). The program was successful, but the advantages of silicon were undeniable and the government shifted their funding to refining GaAs MESFET technology and developing high frequency GaAs amplifiers with the Microwave/Millimeter Wave Monolithic Integrated Circuits (MIMIC) program. (6)

While the initial focus of the MIMIC program was on defense applications, the funding also spurred ancillary developments in test and measurement, assembly, and manufacturing applications. Because of this and other funding, GaAs devices have seen performance, reliability, manufacturability and market share increase appreciably. From the groundbreaking transistor work in the 1940s, the RF semiconductor industry has grown to become a large, vibrant segment of the broader electronics market.

Market Drivers

To predict where the compound semiconductor market will head in 2015 and beyond, it is useful to review the historical performance and the factors that have influenced the growth profile to date. Germanium transistors quickly transitioned to silicon and this technology has enjoyed a tremendous ramp in volume, first with discrete transistors and now with integrated circuits going to process nodes below 20nm. The performance of compound semiconductor-based devices has been superior to silicon and GaAs has become a very mature, high volume technology. Even though other competitive technologies will factor into the prospects for the future of the RF semiconductor market, GaAs is currently the dominant technology.

Figure 1 shows the historical performance of GaAs device revenue from 1999 to a forecast for 2014. It tells an interesting story and illustrates market drivers. There was fast growth in the 1999/2000 timeframe as the dot-com era hit full stride. The working theory was “build it and they will come,” but the unfortunate reality was that there was very little demand for the increased capacity and speed of the new networks. The dot-com bubble burst; companies built networks, but no one came! As a result, GaAs revenue dropped as quickly as it rose and floundered at this level for several years.


Figure 1 • Historical Performance of GaAs RF Device Revenue.


In 2004, the effects of the “wireless revolution” and mobile communications became apparent in GaAs revenue, and the revenue trajectory has been steadily upward since. The initial attraction of mobile communications was the “anywhere, anytime” aspect of staying in touch. As analog communications evolved into digital and data rates started to increase, the clunky bag phone evolved into a much more sophisticated terminal that is fueling the tidal wave of data consumption. Smartphone usage has grown dramatically and Strategy Analytics estimates that nearly two-thirds of all phones sold in 2014 will be smartphones. The CAGR for GaAs device revenue since 2004 has been almost 11%, and this will push 2014 results to an estimated $6.6B.

Cellular Segment

The evidence for how much the GaAs and RF compound semiconductor industries rely on the cellular segment should be clear in Figure 2. This block diagram, courtesy of TriQuint, shows the front end for a representative smartphone. It includes eleven amplification functions, many of them accommodating multiple transmission bands, and eight switching functions, most of them multi-throw.


Figure 2 • Smartphone Block Diagram.

The complexity of the front end is rooted in both technical and business considerations. A simplistic analysis shows that data capacity increases as the spectral efficiency increases (more bits/sec/Hz) or as the amount of bandwidth increases (more Hz). Wireless operators use both of these levers, purchasing additional spectrum and deploying more sophisticated modulation schemes that allow for wider channel bandwidths. The term “4G” has really become synonymous with faster data speeds. To support higher data rates, network operators have embraced the W-CDMA/UMTS standard, which serves as an evolution path for GSM, and the newly developed LTE standard. Both use linear modulation schemes that increase spectral efficiency and incorporate flatter network architectures to reduce cost.
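The two capacity levers can be sketched numerically. In this minimal example the channel widths and spectral-efficiency figures are illustrative assumptions, not values from the article:

```python
# Peak data rate as bandwidth (Hz) times spectral efficiency (bits/sec/Hz).
# The numbers below are hypothetical, chosen only to show the two levers.

def channel_capacity_mbps(bandwidth_hz: float, spectral_efficiency: float) -> float:
    """Return the peak rate in Mbit/s for a given bandwidth and efficiency."""
    return bandwidth_hz * spectral_efficiency / 1e6

# A 20 MHz channel at 5 bits/sec/Hz gives a 100 Mbit/s peak rate;
# doubling either the bandwidth or the efficiency doubles the rate.
print(channel_capacity_mbps(20e6, 5.0))  # 100.0
print(channel_capacity_mbps(40e6, 5.0))  # 200.0
```

Either lever works, which is why operators pursue both spectrum purchases and better modulation.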

The second part of the technical consideration is spectrum and that is a thornier issue. Spectrum is a scarce resource. Since more spectrum cannot be created, the best option is to repurpose it. Governments around the globe are doing this with auctions of reclaimed or underutilized spectrum. This creates some additional challenges for wireless operators, because this additional spectrum may not be close to existing frequency bands, it may not exist over a large geographical footprint, the channel bandwidth may be less than desired and it is expensive!

The final dimension addresses the operator business model. Ideally, operators would like a single handset that covers their entire service footprint and allows users to roam on different networks. This is currently not possible, but operators are pushing to minimize the number of phones they must maintain. To enable this, manufacturers use architectures that incorporate the latest generation of linear PAs, while still accommodating older standards that use saturated PAs. These architectures must accommodate frequency bands that are likely not contiguous and range from 450 MHz to 3.8 GHz. A report from Strategy Analytics identifies 45 E-UTRA WCDMA/LTE bands, with another eight that have been proposed, but not approved. In response to these divergent business requirements, cellular front-end architectures are making more use of multi-mode, multi-band PAs, along with saturated PAs shown as “2G” in the block diagram of Figure 2.

Strategy Analytics estimates that roughly 1.2 billion phones will have block diagrams similar to, or perhaps even more complicated than, the one shown in Figure 2. Our research indicates these phones currently handle an average of 4.6 linear bands, in addition to four saturated bands, and we expect the number of linear bands to exceed six shortly. Add in approximately 1.4 billion “other” cellular devices (feature phones, tablets, PCs, notebooks, e-readers, etc.) and it is easy to see why GaAs and compound semiconductor revenue is so high in this segment.

GaAs Competitors

In the early days of the wireless revolution, GaAs was the only technology that could provide the performance, frequency coverage, cost and reliability for high volume applications. As volumes and device complexity continue to increase, competitive technologies are beginning to capture market share from GaAs. The best example of this is with the handset switches shown in Figure 2. This application has largely shifted to a Silicon-on-Insulator (SoI) technology that makes use of the high volume processing capabilities of silicon CMOS foundries. Silicon also provides a better opportunity to integrate additional low frequency control circuitry and offers better ESD performance. These and other performance-related features have allowed SoI switches to displace GaAs devices in many of these applications.

While silicon switches stand as the largest volume RF application to capture share from GaAs, power amplifiers represent the largest revenue opportunity for competitive technologies. CMOS-based PAs have steadily been capturing share in entry level, lower tier handsets. Since these applications represent a slowly shrinking opportunity, GaAs PA manufacturers have been willing to cede some market share. The big shock to the GaAs community was the announcement of CMOS PAs that target emerging LTE opportunities. Currently, only Qualcomm and Peregrine Semiconductor have competitive offerings for these applications and they are enduring the usual growing pains, but these devices will take share away from GaAs. In addition to CMOS, SiGe for low power applications and GaN and LDMOS for power applications serve as the main competitive threats to GaAs in RF applications.

Combining these thoughts and accounting for all the technologies that address applications in the RF segment, Figure 3 shows a snapshot of the segmentation of the estimated 2014 RF market revenue.


Figure 3 • Segmentation of RF Semiconductor Market Revenue.

The competitive technologies add about $2.4 billion of revenue to the GaAs portion of the market, bringing the total to about $9 billion. It should be very clear how important cellular applications are to the overall RF market, accounting for nearly 66% of the revenue. Adding in other wireless applications like Wi-Fi (the second largest revenue segment), base stations, microwave/millimeter wave backhaul, VSAT, etc. increases the wireless segment to nearly 86% of the total RF semiconductor revenue.

With this snapshot of where the RF compound semiconductor industry stands in 2014 and a good understanding of the developments and trends that got us here, where does the industry go in 2015? The overwhelming, insatiable desire to consume increasing amounts of data will influence every future development and trend. It is not hyperbole to say that every trend in the electronics market ultimately ties to data consumption. Figure 4 shows the Cisco VNI (Visual Networking Index) forecast to 2018, with actual data back to 2009. The CAGR for the data in the chart approaches 28%. To put this into perspective, a petabyte of data is equivalent to about 223,000 DVDs. In 2018, this forecast implies users will generate data equivalent to 29.3 billion DVDs…per month!

The wireless segment shows the fastest growth, with a CAGR slightly greater than 77%. Given the segmentation of the RF semiconductor industry shown in Figure 3, this is a promising trend. As impressive as mobile growth is, it will represent only 12% of total data consumption in 2018! The rest of the data will reside on wired copper, fiber, or coaxial cable networks. Increasingly, a bit of data will travel over several of these networks, and there are opportunities for RF semiconductors in all of these data segments. The revenue and volume associated with the cellular and Wi-Fi segments shift the spotlight away from some of the other areas of the industry, but there are a number of interesting developments taking place in these segments.

I think this discussion of the history, trends, drivers, and present state of the RF semiconductor industry sharpens the view of 2015 and beyond. The last section includes my thoughts on some of the important topics in the industry. Some are obvious, but some will require a bit of faith:

Data Consumption: This engine drives the entire semiconductor market. Any substantial change to the trajectory presented in Figure 4 would have catastrophic repercussions for the semiconductor industry. Even though the last couple of forecasts have shown declining growth rates, the numbers remain large. Developments like 4K or UHD TV, the Internet of Things (IoT), increasing HD and UHD video uploading to social networking sites, and the ongoing arms race between fiber and coax to provide the highest data rates will sustain and probably increase data consumption.

RF CMOS: This is an easy trend to call. CMOS-based amplifiers and switches will continue to capture market share from GaAs. The trendy discussion in the industry has been “the death of GaAs”. This is unlikely to happen anytime soon. CMOS has undeniable performance, integration and cost benefits, but this technology works best with high volume applications that have stable performance requirements. When volumes are lower, mask costs of CMOS affect the cost competitiveness of the technology. RF CMOS revenue will increase in 2015 and beyond, but it will not be the dominant RF technology in the foreseeable future.

MMMB PAs: To address the rise of LTE bands, carrier aggregation, and increasing data consumption, multi-mode, multi-band (MMMB) PAs will continue to capture market share in handset front-end architectures. Given the price sensitivity of this market, the price of MMMB PAs cannot exceed the price of the components they replace. This would seem like a bad development for GaAs device revenue, but manufacturers seem to be making the block diagram even more complicated by including more functionality or expanding the number of bands in the phone. The net effect for the GaAs device market will be neutral to positive. The situation will be a bit different for epitaxial substrate manufacturers, because the MMMB trend will mean less production area and a reduction in the $/mm² metric.

GaN: There has not been much discussion of GaN here, but the technology has turned the corner and is seeing significant adoption in commercial applications. Defense applications have driven the development and adoption of GaN, and the latest Strategy Analytics forecast shows this will continue, with defense applications accounting for more than 50% of GaN revenue in 2018. Commercial adoption is increasing quickly, with CATV adoption continuing and base station PA applications growing rapidly. VSAT and point-to-point radio applications are also starting to see growth. The vast majority of these devices will be GaN-on-SiC. There is a chorus pushing GaN-on-silicon into lower power, high volume applications, the argument being that its lower cost structure will allow the technology to address more applications. Unless there is a disruptive manufacturing development, the realization of this idea appears to be several years down the road, at best.

Trajectory Change

The preceding topics have addressed a shorter time horizon. The final two topics are longer term, with the potential to change the trajectory of the entire industry.

Internet of Things (IoT): This is one of the hottest discussions in the electronics industry. The premise is that deploying a large number of embedded computing sensors and interconnecting them, through wide area networks and the Internet, will dramatically improve society. The latest Strategy Analytics forecast anticipates more than 33 billion devices connected to the Internet in 2020. The concept involves smart sensors sending data that enables better decisions. This “intelligence” gives rise to applications involving telemedicine, “smart” cities and utilities, industrial automation, security, and a whole host of others. The connected devices and networks will create a vibrant service economy, which will provide substantial revenue. This concept is a superset of M2M communications, Wi-Fi, cellular terminals, and the whole host of devices already connected to the Internet, so it is clear that the IoT is already happening. The use cases currently involve low data rate, low power applications, so silicon-based semiconductors manufactured in high volume seem the most likely choice. With the breadth of devices and applications included in the IoT concept, there is little doubt that there will be growth. The challenge will be identifying applications for RF compound semiconductors.


Figure 4 • IP Data Consumption.

5G: If IoT is the most discussed topic, then 5G is running a close second. This concept assumes that existing network architectures will not be able to keep up with the anticipated increases in data consumption. This effort will revolutionize the RF industry, because the goal is to increase user data rates, capacity, battery life, and the number of connected devices by orders of magnitude over existing capabilities. Network deployments may not come until 2020, but development work streams are already underway, under the auspices of Alcatel-Lucent, Fujitsu, NEC, Ericsson, Samsung, and Nokia. This is a disruptive opportunity for the RF semiconductor industry because several of the activities involve developing networks in frequency bands from 5 GHz to 86 GHz, where more bandwidth is available. Other concepts under development involve antenna beamforming, beam tracking, and massive MIMO. These all play to the strengths of compound semiconductor devices, and 5G represents an exciting opportunity for the entire RF semiconductor supply chain.

This is a very exciting time for the RF semiconductor industry. High volume applications are growing, new technologies are gaining traction and new applications are in development to handle the tidal wave of data consumption. There will undoubtedly be twists and turns, along with a surprise or two along the way, but the future for the industry looks rosy.

About the Author:

Eric Higham serves as Director, Advanced Semiconductor Applications Service, Strategy Analytics. He has held various positions in engineering, applications, business development and marketing at Raytheon, MicroDynamics and M/A-COM. He received a BSEE from Cornell University with a concentration in solid-state semiconductors and an MSEE from Northeastern University with a concentration in Fields, Waves and Optics.



Your sweat may bring medical diagnostics to Fitbits and Fuelbands

By Jason Heikenfeld

We may soon learn to like our sweat a lot more—or at least what it can reveal about our health. We’d certainly prefer giving a doctor a little sweat to being punctured for a blood test—or even providing a urine sample—as long as we didn’t have to run a mile or sit in a sauna to do it. And if sweat could provide constant updates about our bodies’ reactions to a medication, or track head trauma in athletes, we might just start to appreciate it.

Sweat contains a trove of medical information and can provide it in almost real time. And now you can monitor your sweat with a wearable gadget that stimulates and collects it using a small patch and analyzes it using a smartphone—that is, if you visit my lab.

Using sweat to diagnose disease is not new. For decades, doctors have screened for cystic fibrosis in newborns by testing their sweat. And in the 1970s several studies tried using sweat to monitor drug levels inside the body. But in the early days of sweat diagnostics, the process of collecting it, transporting it, and measuring it was vastly more complicated than an ordinary blood test, so the technology didn’t catch on.

That’s about to change. Researchers have discovered that perspiration may carry far more information and may be easier to stimulate, gather, and analyze than previously thought.

My group at the University of Cincinnati, working with Joshua Hagen and other scientists at the U.S. Air Force Research Laboratory, at Wright-Patterson Air Force Base, in Ohio, began five years ago to look for a convenient way to monitor an airman’s response to disease, medication, diet, injury, stress, and other physical changes during both training and missions. In that quest, we developed patches that stimulate and measure sweat and then wirelessly relay data derived from it to a smartphone. In 2013 the Air Force expanded on my group’s work and that of our collaborators by sponsoring the Nano-Bio-Manufacturing Consortium, in San Jose, Calif., created to accelerate the commercialization of biomonitoring devices such as sweat sensors.

Illustration: James Provost
Perspiration Detective: This patch, developed at the University of Cincinnati, uses paper microfluidics to wick sweat from the skin through a membrane that selects for a specific ion, such as sodium. Onboard circuitry calculates the ion concentration and sends the data to a smartphone. The electronics within the patch are externally powered, as in an RFID chip. 

My colleagues and I started by looking for something sweat could reveal that would be useful to a large number of people. We settled on monitoring physical fatigue—in particular, alerting athletes if they were about to “crash” because of overexertion or dehydration. This problem may sound mundane, but it is hard to predict. Even million-dollar athletes regularly leave competitions because of cramping, and warning of an approaching imbalance in electrolytes could prompt an athlete to take in fluids to avoid such a mishap.

With the testing of athletes in mind, we started by measuring the substances dissolved in sweat. You probably know, thanks to decades of commercials for Gatorade, that sweat is rich with electrolytes, electrically charged ions of elements like sodium, chlorine, and potassium, with concentrations from ones to tens of millimoles per liter. (In biological terms, that is actually a lot: Normally, blood has a 3.5 to 5.2 millimolar concentration of potassium. That is, it contains 3.5 to 5.2 millimoles of potassium per liter.) Ideally, we wanted to figure out the balance of electrolytes in sweat and how it correlates to the balance of electrolytes in the blood, because it is an imbalance of electrolytes in the blood that causes severe symptoms of dehydration like muscle cramping.

Measuring the saltiness of sweat doesn’t turn out to be particularly useful for monitoring athletes, because levels of sodium and chloride in sweat don’t correlate with any particular changes in blood levels. That’s because the cell membranes lining the sweat gland act as “salt pumps.” When messages from the central nervous system trigger the membranes to push negatively charged chloride ions out, they drag positively charged sodium ions with them, maintaining a neutral charge in the sweat duct. The insides of the cells become less salty than their exteriors. This imbalance draws water through the cell membranes into the sweat duct, until the sodium and chloride concentrations again match; as a result, the cells shrink until they can replenish themselves by pulling in water and salt from adjacent cells. The process repeats to create more and more sweat.

On the other hand, measuring levels of sodium and chloride in sweat is essential in diagnosing cystic fibrosis. The cells lining the upper portion of the sweat ducts normally reabsorb most of the salt that is produced by the sweat creation. (The body is smart; we need to retain those electrolytes.) But for patients with cystic fibrosis, the cells that handle that reabsorption don’t work properly, and simple benchtop equipment in a doctor’s office can detect the presence of saltier-than-normal sweat.

With sodium and chloride off the table, we looked at a number of other substances in the blood whose levels increase when the body gets dehydrated and that diffuse into sweat in a more orderly way, meaning that when they appear in high concentration in sweat, they must be at a high concentration in the blood. Like sodium and chloride, these are small ionic solutes in sweat—ones that I can’t specify here, unfortunately, because of confidentiality agreements.

Although we couldn’t use them directly to gauge dehydration, we weren’t quite done with sodium and chloride. We found that the faster the sweating, the saltier the sweat (because there is less time for the body to reabsorb the sodium and chloride). Correlating the levels of the telltale substances in sweat with their levels in blood isn’t exactly straightforward, because their diffusion from blood into sweat is slow: as the rate of sweating increased, the substances we were tracking became more diluted. By also monitoring sodium and chloride levels, we could correct our sweat measurements accordingly.
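A rough sketch of that correction idea: since the actual solutes and calibration are confidential, the linear model and every coefficient below are invented purely for illustration.

```python
# Hypothetical dilution correction. Assumption (for illustration only):
# sweat sodium rises roughly in proportion to sweat rate, so the measured
# sodium level can serve as a proxy for how diluted a tracked solute is.

def corrected_solute(measured_solute_mM: float,
                     sodium_mM: float,
                     baseline_sodium_mM: float = 30.0) -> float:
    """Scale a measured solute level by a sodium-derived dilution factor."""
    dilution_factor = sodium_mM / baseline_sodium_mM  # >1 means faster sweating
    return measured_solute_mM * dilution_factor

# A solute reading of 0.8 mM taken while sweating hard (sodium at 60 mM,
# twice the assumed baseline) is corrected upward to 1.6 mM.
```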

Detecting sodium and chloride ions requires two things: an electrode coated with an ion-selective membrane, and a reference electrode, typically made of silver chloride. The coating for the ion-selective membrane is a standard polymer—like the plastic used to make plumbing pipes—through which ions have great difficulty penetrating, along with a special ionophore molecule that allows the passage of only one type of ion. If the ionophore is for sodium, sodium is able to easily penetrate into the polymer coating, and because sodium is a positively charged ion, a voltage of several millivolts builds up. Because the voltage of the reference electrode does not change, you can measure the total voltage of a circuit by connecting the two electrodes with a meter, calculating the voltage induced by the ion-selective membrane, and from that calculating the ion concentration. As sodium and chloride generation by sweat are interrelated, you also obtain a simple measurement of chloride.
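For an ideal monovalent ion-selective electrode, the voltage-to-concentration step described above follows the Nernst relation. A minimal sketch, assuming ideal electrode behavior and a hypothetical calibration offset (real patches are calibrated against known solutions):

```python
import math

R = 8.314462618   # gas constant, J/(mol*K)
F = 96485.33212   # Faraday constant, C/mol

def sodium_concentration_mM(cell_voltage_mV: float,
                            e0_mV: float = 0.0,
                            temp_c: float = 33.0) -> float:
    """Invert E = E0 + (RT/F) * ln(c) for a z = +1 ion; returns c in mM."""
    slope_mV = 1000.0 * R * (273.15 + temp_c) / F  # ~26.4 mV per e-fold at 33 C
    c_mol_per_liter = math.exp((cell_voltage_mV - e0_mV) / slope_mV)
    return c_mol_per_liter * 1000.0

# Near skin temperature, each ~61 mV increase in the membrane voltage
# corresponds to a tenfold increase in sodium concentration.
```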

That’s how we can find out how much salt is in sweat. Trickier is capturing the sweat quickly, getting it to the sensors, and then disposing of it, because you don’t want to hang on to old sweat and mix it with new. We decided to use paper microfluidics, the lowest-cost form of plumbing we could find that would move fluid along the patch. Pregnancy-test sticks use paper microfluidics in this way.

In our patch, the paper wicks sweat in a tree-root pattern, maximizing the collection area while minimizing the volume of paper. To keep the sweat pumping along after it passes through the sensors, these microfluidic channels direct the sweat to a superabsorbent hydrogel, such as the filler used in diapers, which pulls the sweat out of the paper and stores it. The patch can pull sweat along for several hours with the hydrogel swelling only 2 to 3 millimeters, enlarging it to hundreds of times its original volume.

We built a sodium sensor, the voltage meter, a communications antenna, the microfluidics, and a controller chip onto a patch that’s externally powered (like an RFID chip) by a smartphone. We printed it onto a flexible substrate and, with the help of researchers at the 3M Co., coated it with a sweat-porous adhesive so that it could stick to the skin. In tests, this patch performed as well as the benchtop electrolyte-sensing systems used by doctors to test for cystic fibrosis. We have had a couple of people in our research group wearing the patches for as long as a week.

Right now our industry partners are preparing to use standard flexible-electronic manufacturing processes to produce several hundred patches for more extensive human trials, which are expected to start before the end of the year. We’re also adding about a half dozen other sensors that will detect additional ions besides sodium and chloride and use them to predict things like exertion level and muscle injury or damage. The initial results look promising, and if the upcoming human trials go well, it’s not a far stretch to imagine using the patch in conjunction with the RFID-reading mats that already record marathoners’ split times to also identify runners at risk of a dangerous electrolyte imbalance.

This kind of passive patch should work great for athletes, who are usually pumping out plenty of sweat. But my colleagues and I also wanted to measure sedentary people—for example, cystic fibrosis patients, who normally don’t sweat much.

The solution is to use an electrical process called iontophoresis, which stimulates the skin to produce sweat. Iontophoresis works by placing an electrically charged medication on the skin and using an electrode and a low current—less than 1 milliampere per square centimeter—to draw the medication into the skin.

Photos: Dottie Stover/University of Cincinnati (3)
Sweat Seekers: University of Cincinnati Ph.D. student Daniel Rose holds a petri dish containing the raw electronics [top] of the sweat-sensing patch he helped create, then tests the patch on his brother, Roger Rose, during a workout [middle]. A smartphone app displays data sent from the patch [bottom left].

Doctors have used iontophoresis for years to push anti-inflammatory drugs through the skin to reach injured tissue. And they have used it in a cystic fibrosis test for newborns to infuse pilocarpine, a medication that stimulates sweat glands, into the skin.

We’ve built the components needed to add this same capability to our patch. By carefully controlling the current that drives the iontophoresis, and therefore the absorption of pilocarpine, we can keep sweat flowing on as big or as small a spot as we want for hours, and possibly even days, at a time.

Electrolytes are by far the easiest component of sweat to measure. But metabolites—like lactate, creatinine, and glucose—shouldn’t be too much harder.

The lactate level is a great indicator of a person’s ability to cope during rigorous exercise or while on life support. Lactate, or lactic acid, is a by-product of burning glucose without oxygen; when the body is not getting enough oxygen, it generates more lactate. Higher concentrations of creatinine and urea indicate an unhealthy kidney struggling to clear waste products from the body. In people with chronic kidney disease, so much urea is excreted in sweat that the accumulation of urea crystals makes the skin look frosted. And glucose monitoring, of course, is key to managing diabetes.

As yet, we have not found a way to predict exact blood levels by measuring these metabolites in sweat. So using sweat to monitor glucose levels, as desirable as that would be, is still out of reach. But being able to sense a general increase or decrease in metabolites, even without knowing their exact concentrations, can still be valuable, as Joseph Wang and his colleagues at the University of California, San Diego, recently demonstrated.

Wang’s team built a tattoolike electronic sweat sensor and had test subjects wear it during a vigorous cycling routine. Measuring a change in lactate, Wang found, might be sufficient to warn that an athlete was going to “hit the wall.” Joshua Windmiller, a former student of Wang’s, has started a company, Electrozyme, to commercialize the technology.

Metabolites like lactate in sweat are in the micromolar to millimolar range, still a relatively high biological concentration and easily measurable with a simple circuit. You again coat an electrode, but here the coating includes an enzyme specific to a particular metabolite, such as glucose oxidase or lactate oxidase. (Enzymes lower the amount of energy needed to cause a reaction.) Lactate oxidase, for example, breaks lactate into pyruvate and peroxide. The energized electrode steals two electrons from each molecule as it breaks the peroxide into oxygen and two hydrogen protons. Because lactate oxidase affects only lactate, only more lactate can generate more electrons, so any change in lactate concentration shows up as a change in current through a sensing circuit.
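In circuit terms, the readout then reduces to converting a change in sensing current into a change in concentration. A minimal sketch, with a made-up sensitivity value (real sensors are calibrated):

```python
# Amperometric readout sketch: the enzymatic reaction yields two electrons
# per lactate molecule, so the sensing current tracks lactate concentration.
# The sensitivity below is hypothetical, not a measured device parameter.

def lactate_change_mM(current_nA: float,
                      baseline_current_nA: float,
                      sensitivity_nA_per_mM: float = 50.0) -> float:
    """Convert a change in sensing current into a change in lactate level."""
    return (current_nA - baseline_current_nA) / sensitivity_nA_per_mM

# A rise from 100 nA to 250 nA at 50 nA/mM indicates a 3 mM increase in
# lactate -- a trend signal, not an absolute blood value.
```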

More work in developing this type of sensor is needed, though, because when sweat glands work really hard, they also generate their own lactate, which can skew the data. The measurement of some other metabolites with this technique, however, isn’t subject to this problem. For example, the sensors that measure current also work well for other molecules that react in the presence of an enzyme, including urea, which in addition to signaling kidney health shows a substantial increase in both blood and sweat when dehydration reaches a dangerous point.

Compared with ions and metabolites, many of the biomarkers that doctors rely on to diagnose stress, disease, poor nutrition, injury, and other conditions are far harder to detect, because they are found in blood and sweat at mere nanomolar to picomolar concentrations (billionths or trillionths of a mole per liter). But detecting their presence in sweat is nevertheless possible.

Lately, some of the hardest-to-measure biomarkers—small-protein cytokines—are generating the most excitement. Cells release cytokines under a number of circumstances, including trauma, infection, and cancer. For example, the concentration of a cytokine called interleukin 6 (IL-6) can increase up to a thousandfold during an infection.

Esther Sternberg and her colleagues at the University of Arizona recently demonstrated that several cytokines, including IL-6, have the same concentration in sweat as they do in blood. This means doctors could use sweat to diagnose a wide variety of physical and mental stresses. Right now, though, the tools needed to measure the nanomolar to picomolar concentrations of cytokines in sweat are as big as a refrigerator, or at best a suitcase. The trick here is getting the technology down to the size of a wearable gadget. My colleagues and I think that’s possible and are working toward that goal.

The basic problem is this: These biomarkers are present at such low concentrations that they can’t generate enough voltage or current themselves to overcome noise. A better strategy would be to coat an electrode with a biorecognition element—basically a biochemical puzzle piece custom designed to selectively match up to, grab, and hold the biomarker we are trying to sense. We would then apply an alternating electrical signal to the electrode. As biomarkers gather on the electrode, they should act as a barrier to electrical current, increasing the electrical impedance in a measurable way.

A more exotic and sensitive approach would be to add a molecule called a redox couple to the top of the biorecognition element. A redox couple inserted in an electrochemical process makes it easier for electrons to move from a solution to an electrode. When the biomarker binds to the biorecognition element, it changes the shape of the element, bringing the redox couple closer to the electrode, close enough to allow it to dramatically increase the flow of current. With support from the National Science Foundation and a biochemist in my lab, we recently demonstrated sensing cytokines at concentrations below 1 picomolar using this technique.

The Air Force is interested in the possibility of measuring cytokine biomarkers to monitor extreme stresses on pilots. And it is even more interested in neuropeptide biomarkers that can give clues to the state of the brain, like Orexin-A, which indicates alertness.

In an attempt to push the limits of biomarker detection even further, investigators at the Air Force Research Laboratory led by Rajesh Naik are coating nanowire, nanotube, and graphene electrodes, configured as field-effect transistors, with biorecognition elements. These researchers have already built sensors capable of measuring biomarkers present at only a 0.01-picomolar concentration in simulated sweat. The issue here, for now, is that nanowires and graphene are still a bit exotic and not yet easily manufacturable.

Ultimately, sweat-sensing patches will measure multiple electrolytes, metabolites, and other biomarkers at the same time. Their designers will no doubt have to devise some clever algorithms to account for differences in the way various electrolytes, metabolites, and biomarkers migrate into sweat. But it will be worth the effort. Being able to measure multiple biomarkers might allow physicians to conduct cardiac stress tests on a treadmill without drawing blood. They could also measure the impact of drugs on the body so that dosages could be determined more precisely, as opposed to the crude estimates we use now based merely on age and body weight.

There is still work to do on the digital signal processing and algorithms needed to analyze the raw electrical measurements of biomarkers in sweat. But a physical-exertion sensor patch is a near reality, about to be tried on hundreds of people. If all goes well, we could have sweat-sensing patches—at least sensors for athletics—on the market in low volume next year. These do not have to go through a lengthy approval process with the U.S. Food and Drug Administration because they are not meant to be used for diagnosis or treatment of disease.

The second-generation patch we’re now working on in the lab is nearly complete. It includes secure Bluetooth communication, data storage, and a small microcontroller to detect higher-frequency and more complex signals from the electronic sensors on the patch. Analysis of these more sophisticated waveforms is critical for the detection of the really low-concentration biomarkers, like cytokines. Ultimately, sweat analysis will offer minute-by-minute insight into what is happening in the body, with on-demand sampling in a manner that is convenient and unobtrusive.
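The article does not specify the signal-processing algorithms, but one elementary building block such a microcontroller pipeline would almost certainly include is smoothing the raw current waveform before extracting low-amplitude features. A minimal sketch, with all names and data illustrative rather than taken from the patch itself:

```python
# Minimal illustrative sketch: moving-average smoothing of a raw sensor
# waveform, a typical first step before extracting small features from
# a noisy electrochemical signal.

def moving_average(samples, window=4):
    """Return the moving average of `samples` over a sliding window."""
    if window <= 0 or window > len(samples):
        raise ValueError("window must be in 1..len(samples)")
    return [
        sum(samples[i:i + window]) / window
        for i in range(len(samples) - window + 1)
    ]

# A transient spike at index 4 is spread out and attenuated by smoothing.
raw = [1.0, 1.2, 0.9, 1.1, 5.0, 1.0, 1.1, 0.9]
print(moving_average(raw, window=4))
```

Detecting picomolar-level biomarkers in practice requires far more sophisticated waveform analysis than this, which is exactly the work the author describes as still in progress.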

Researchers have understood the richness of the information carried in sweat for some 50 years, but they have been unable to take advantage of it because of the difficulty of collecting, transporting, and analyzing the samples. With the many recent advances in sensing, computing, and wearable technology providing inspiration—and with more than a little perspiration in the laboratory—we are on the verge of a true revolution in wearable diagnostics.

This article originally appeared in print as “Let Them See You Sweat.”

About the Author

Jason Heikenfeld is a professor of electrical engineering and director of the Novel Devices Laboratory at the University of Cincinnati. A founder of Gamma Dynamics, a maker of electrofluidic displays, he wrote IEEE Spectrum’s 2010 article “The Electronic Display of the Future.” An avid runner, Heikenfeld has recently focused his research on sweat—something he deals with both in and out of the lab.


Dr. Geoff Taylor writing ebeam gate features on a POET wafer


Toronto, Ont., Canada — An exciting new leap in semiconductor chip design, based on a combination of optics and GaAs, promises to change integrated circuits drastically, making them 10 to 100 times faster than conventional silicon while reducing power consumption by 80 percent, which also makes the development eco-friendly. Prototypes will be ready for display and testing with third parties by the end of 2014.

The new development comes from POET Technologies (TSX: PTK and OTCQX: POETF), a publicly listed company and the developer of the “POET” platform. POET’s head office is in Toronto, ON, Canada, and its research and development lab is in Storrs, CT. POET designs III-V semiconductor devices for military, industrial and commercial applications, including infrared sensor arrays and ultra-low-power random access memory. POET Technologies has several patents issued and pending for the POET process, with potential high-speed and power-efficient applications in devices such as servers, tablet computers and smartphones. It has been the company’s mission to provide a valid alternative to the aging designs used in traditional silicon CMOS.

The Company’s name is an acronym for “Planar Opto-Electronic Technology”, a revolutionary III-V process used to monolithically build electrical, optical, and electro-optical integrated circuits. POET supports a full range of electrical and optical active and passive circuit components. POET-based devices have the potential to provide very high performance vs. existing silicon-based devices (up to 100X faster) with very low power consumption, up to 80 percent less than existing silicon devices. The POET process is much more versatile than legacy hybridized fabrication of compound semiconductor devices (GaAs, InP, others) and can be implemented using existing CMOS chip-making equipment.

POET will be fully compatible with existing semiconductor design and manufacturing flows, allowing functions that today require multiple chipsets to be integrated into a single chip, for large component-cost reductions and, particularly for optics, a tremendous (~80 percent) reduction in assembly and test costs.

This breakthrough has been achieved by turning to strained InGaAs quantum wells with indium concentrations of 70 percent or more; the resulting increases in mobility and channel velocity, together with circuit operation at 0.3 V, should enable up to a ten-fold gain in performance and up to 80 percent lower power requirements compared to a silicon-based CMOS IC.
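The power figure can be sanity-checked against the standard CMOS dynamic-power relation, P ≈ αCV²f. Assuming, purely for illustration, a 1.0 V silicon baseline supply (only the 0.3 V figure comes from the article), the quadratic dependence on supply voltage alone accounts for the claimed savings:

```python
# Sanity check of the power claim using the standard CMOS dynamic-power
# relation P = alpha * C * V**2 * f. The 1.0 V silicon baseline is an
# illustrative assumption; only the 0.3 V figure comes from the article.

def dynamic_power_ratio(v_new: float, v_old: float,
                        f_new: float = 1.0, f_old: float = 1.0) -> float:
    """Ratio of dynamic power after scaling supply voltage and frequency,
    with switching activity and capacitance held constant."""
    return (v_new / v_old) ** 2 * (f_new / f_old)

# Same clock frequency, 1.0 V -> 0.3 V: power drops to about 9% of baseline.
print(dynamic_power_ratio(0.3, 1.0))

# Even running the clock twice as fast, power is still roughly 18% of the
# silicon baseline, consistent with the "up to 80 percent less" claim.
print(dynamic_power_ratio(0.3, 1.0, f_new=2.0))
```

This is a first-order check only; leakage power and the higher drive currents of III-V channels are not captured by the dynamic-power term.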

18 Years of R&D
Development of this technology started in the early 1990s in the labs at the University of Connecticut. Since then, more than 18 years have been devoted to developing and proving out numerous components of the platform by POET’s Chief Scientific Officer and Director Dr. Geoff Taylor and his team. POET’s business model is to license the III-V semiconductor process technology IP to customers and foundry partners, enabling designs that combine analog, digital and optical functions on the same die for a variety of markets including, but not limited to, hand-held smartphones and tablets, PCs, servers, data centers, and military and industrial applications.

With this technology, the compatibility issue between transistors and the optical devices disappears, and it is possible to form high-mobility channels for both the n-type and p-type transistors. One challenge had been the assumption that these high-mobility materials have to be introduced on a silicon substrate. In our case, we use substrates made of GaAs. These are currently available in diameters up to 200 mm, and there is no fundamental barrier to the production of 300 mm equivalents, which is the commercial foundry standard. Our preferred growth technique for depositing III-V layers on this foundation is molecular beam epitaxy (MBE), and this can be applied to substrates of this size. Tier 1 fabs already use this approach to deposit material on 300 mm wafers, so the only barrier to a switch of substrate is cost, not availability of the technology. Differences between the price of silicon and GaAs substrates will shrink as shipments of the latter rise, and costs could be further reduced through innovations in substrate release techniques. The POET fabrication process employs most of the same set of foundry tools currently used for silicon CMOS, so minimal reconfiguration is required.

The idea of using GaAs rather than silicon to make digital circuits is not new. During the nMOS era that spanned the 1970s and early 1980s, GaAs MESFET technology was a contender for silicon E/D logic applications. And later, during the development of CMOS, the GaAs HEMT was also considered for high-speed logic circuits.

The key differences in the present technology are that we are now able to integrate electronic GaAs devices with optical GaAs devices, and that we have substituted optical interconnects for long metal interconnects.

Furthermore, in contrast to other technologies that are trying to go beyond the barriers of silicon CMOS, the POET approach uses conventional fab processes together with MBE-grown wafers; MBE is the only epitaxial technique that provides the precision doping, thickness control, and laser-quality material required.

A significant capability of the technology is that the epitaxial process is unmatched in its ability to realize self-assembled quantum dots. Although not a current objective, it turns out that the modulation-doped interface formed with this technology, which is a normally off channel, is ideal for the implementation of the single-electron transistor. This form of transistor can access engineered quantum dots at the interface, which have quantum levels differentiated by spin. It is possible that these single-electron transistors could aid the development of quantum computing, with electron spin providing the quantum variable to form quantum computing logic blocks.

Moore’s Law Revoked
For almost 50 years, Moore’s Law has dictated the pace of technological change. As the number of transistors on a chip doubles approximately every 1.5 to 2 years, the performance of computing devices, and the many functions they make possible, grows accordingly. Unfortunately, with present silicon-based integrated circuits and manufacturing processes, performance and cost improvements under Moore’s Law are increasingly unsustainable, and will soon come to an end.

These physical limitations will increasingly impede electronics manufacturers from continuing to build smarter, faster, more efficient and cheaper devices — including sensors, lasers and computing devices.

By integrating optics and electronics onto one monolithic chip, POET expects to provide its customers with a new direction that is no longer strait-jacketed by the limitations of silicon technology.

See also Compound Semiconductor magazine (June 2014, pages 52-57). The digital edition can be found on Compound Semiconductor’s digital edition page.

Shiva Nathan’s new prosthetic can read patterns in the wearer’s brainwaves and transmit them to a robotic arm, allowing the user to flex the mechanical fingers with nothing more than their thoughts. And if this project needed any more cool points, its creator is 15 years old.

Read more at I Love Science

Photo Source: Shiva Nathan via Parallax


BRL Test is excited to see you in Tampa for the International Microwave Symposium in June 2014! We hope to speak with you about how our business model is more affordable, yet just as relevant, for your microwave projects compared to the “latest and greatest with planned obsolescence” model offered by some.


The use of drones is a hotly debated topic right now. These robots of the air could be used for spying or they could deliver Amazon packages — no one really knows.

But thanks to YouTube channel TheDmel, we at least know one thing for sure: Drones can groove.

While waiting for the rest of the world to figure out what the heck to do with those enigmatic fliers, the guys at KMel Robotics decided to drop a beat and choreograph a handful of drones for some fun. We can see these talented machines put to good use on Broadway — maybe WALL-E the Musical is right around the corner.

Take a look and see if you could step up to their sweet moves.

Via: Mashable & YouTube


Anyone who has ever ridden a bike in the city knows what it’s like to get a mouthful of exhaust. What if your bike could reduce those fumes and clean the air? A group of designers and engineers from Bangkok-based Lightfog Creative and Design won a Red Dot award for their air-purifying bicycle, designed to scrub polluted air while moving through traffic.

The idea, which has not yet reached the prototype stage, is that a filter between the handlebars would scrub polluted air of particulates. The frame itself would work something like a leaf, converting sunlight into energy that would presumably run a fuel-cell battery, the by-product of which would be good, clean oxygen for everyone.

CLICK HERE for more!

Sources: I Love Science, Science Channel,

NASA before PowerPoint...