FPGAs – past, present and future
If “history is the best guide to the future”, a look back at the ages of the FPGA can show what the next generations will bring. The 34 years since the FPGA’s invention can be viewed as a progression through several ages, driven by the push of application demand and the pull of process technology. Ivo Bolsens, CTO and senior vice president at Xilinx Inc., tells us more
1984-1992 – The age of invention
The first FPGA, the Xilinx XC2064, contained what we’d describe today as 64 logic cells – fewer than 1,000 gates. The XC2064 had only 64 flip-flops, yet cost hundreds of dollars because its die was so large. For large die, yield falls off super-linearly with area, so a five per cent increase in die size could double the cost and might even have dropped yield to zero. Cost containment was not a question of mere optimisation; it was a question of survival. This spawned architectural and process innovations to maximise FPGA design efficiency. One key advance was the move to antifuse-based FPGAs, which eliminated the on-chip area penalty of SRAM storage, albeit at the loss of re-programmability. In 1990, the largest-capacity FPGA was the antifuse-based Actel 1280. QuickLogic and Crosspoint followed suit.
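The super-linear relationship between die size, yield and cost can be illustrated with the classic Poisson yield model, Y = exp(-D·A), where D is the defect density and A the die area. The sketch below is purely illustrative: the die area and defect densities are assumed round numbers, not figures from the XC2064 era, and real foundries use more elaborate yield models.

// Illustrative only: why die cost grows super-linearly with area.
// Classic Poisson yield model Y = exp(-D*A); the area and defect
// densities below are assumed values, not historical process data.
#include <cmath>
#include <cstdio>

int main() {
    const double area = 2.0;   // baseline die area in cm^2 (assumed)
    const double grow = 1.05;  // a five per cent die-size increase
    const double densities[] = {0.5, 1.0, 2.0, 4.0};  // defects/cm^2 (assumed)

    for (double d : densities) {
        double y0 = std::exp(-d * area);         // yield of the baseline die
        double y1 = std::exp(-d * area * grow);  // yield of the larger die
        // Cost per good die is proportional to die area divided by yield.
        double costRatio = (area * grow / y1) / (area / y0);
        std::printf("D=%.1f/cm^2: yield %7.3f%% -> %7.3f%%, cost x%.2f\n",
                    d, 100 * y0, 100 * y1, costRatio);
    }
    return 0;
}

At low defect densities a five per cent larger die costs only a few per cent more, but as D·A grows the penalty compounds; at the edge of manufacturability, where the XC2064 sat, a small area increase could indeed double the cost of a good die.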
During the Age of Invention, FPGAs were much smaller than the applications users wanted to put into them. Multiple-FPGA systems became popular, and multi-chip partitioning software became an important part of the FPGA design suite. Radically different FPGA architectures precluded universal design tools, so FPGA vendors took on the added burden of EDA development for their own devices. Manual design and optimisation were often necessary because of limited routing resources on the chips.
1992-1999 – The age of expansion
Through the early 1990s, typical FPGA start-up companies were fabless and unable to access leading-edge silicon technology, so the dawning age of expansion saw FPGAs lagging the IC-process curve. Later in the decade, IC foundries realised that FPGAs made ideal process drivers, and FPGAs became process pipe-cleaners. Each process generation doubled the number of transistors available, which halved the cost per function and doubled the size of the largest available FPGA. Chemical-mechanical polishing (CMP) allowed foundries to stack more metal layers on an IC, enabling FPGA vendors to increase on-chip interconnect and accommodate larger LUT capacities (see Figure 2).
Area was no longer as precious as in the age of invention; there was now more scope for trading off performance, features and ease of use. Larger FPGA-based designs required synthesis tools with automatic placement and routing, so by the end of the 1990s automated synthesis, placement and routing had become essential steps in the design process. An FPGA company’s success now depended on its EDA tools as much as on its FPGA capabilities. Most importantly, the easiest way to drive capacity up and cost down was to target the next process technology node. SRAM-based FPGAs were the first to adopt each new node, so they could use denser processes immediately. Antifuse-based FPGAs lost their competitive edge.
2000-2007 – The age of accumulation
By the millennium, FPGAs were common components in digital systems. Capacity and design size grew rapidly, and FPGAs had found a huge market in data communications. Switching from custom silicon to FPGAs combined lower cost with lower risk.
By now, FPGAs were larger than the typical problem size: increased capacity alone didn’t guarantee market growth.
Figure 2: The growth of FPGA LUTs and interconnect wires over time. Wire length is measured in millions of transistor pitches
FPGA vendors addressed this challenge in two ways. To meet the new needs of lower-end markets, vendors re-focused on efficiency and produced families of lower-capacity, lower-performance, “low-cost” FPGAs such as the Xilinx Spartan FPGA families. At the high end, FPGA vendors made it easier for customers to fill up the biggest FPGAs by producing libraries of soft logic (IP) for functions like memory controllers, communications and Ethernet MACs – even soft microprocessors such as the Xilinx MicroBlaze processor.
FPGA design changed in the 2000s. Large FPGAs accommodated very large designs that were complete subsystems. By the end of the age of accumulation, arrays of gates had morphed into collections of complex functions integrated through programmable logic. They became systems.
2008-2017 – The age of SoC
As FPGAs combined system blocks such as high-speed transceivers, memories, DSP processing units and complete processor subsystems, they also incorporated important security features like bitstream encryption and authentication. System functions offered powerful mixed-signal processing, and control functions for system temperature and power monitoring and management became ubiquitous. This period also introduced a major step towards improving programming efficiency: High-Level Synthesis (HLS). Extended by software-like flows using OpenCL, C and C++, HLS marked a major step forward in ease of use and hardware efficiency.
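To give a flavour of what HLS code looks like, the fragment below is a minimal sketch in the style of Xilinx Vivado HLS C++: a fixed-length dot product with a pipeline directive. The function name and vector length are invented for illustration, and pragma options vary between tool versions.

// Minimal HLS-style C++ sketch (illustrative only): a dot-product kernel.
// The pragma follows Xilinx Vivado/Vitis HLS conventions; exact directive
// names and options depend on the tool version in use.
#include <cstddef>

constexpr std::size_t N = 1024;  // assumed vector length

// The HLS tool synthesises this C++ function into a hardware block; the
// PIPELINE pragma requests one loop iteration per clock cycle.
int dot_product(const int a[N], const int b[N]) {
    int acc = 0;
accumulate:
    for (std::size_t i = 0; i < N; ++i) {
#pragma HLS PIPELINE II=1
        acc += a[i] * b[i];
    }
    return acc;
}

Much of HLS’s appeal is that the same function compiles and runs as ordinary software, so a design can be verified in C++ long before it is synthesised to logic.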
2018-?? – The age of adaptive compute acceleration
As Moore’s Law breaks down, traditional computing systems can no longer rely on process scaling for improvements in performance, energy efficiency and cost. At the same time, requirements like real-time performance, privacy and communication bandwidth create the need for more powerful and efficient computing capabilities at the edge of the network. Domain-specific computing may be the solution, offering a customisable architecture and programming environment tailored to particular application domains. While faster compute hardware can be essential to improving the performance of a system, other aspects such as efficient memory access, interconnect bandwidth, deterministic behaviour and low latency are equally important.
Adaptive compute acceleration platforms (ACAPs) will assume a central role in future computing systems, in the cloud and in the edge infrastructure. They provide a parallel, heterogeneous programmable platform that can be customised to meet the requirements of varied application domains. Where does this lead? Programmability and adaptability are of fundamental value. Efficient customisation of logic functions, memory access, connectivity for data movement and compute parallelism will all be required to accelerate workloads. The age of adaptive compute acceleration is here to stay; only history will tell for how long.
Figure 1: Xilinx FPGA attributes relative to 1988. Price and power are scaled up by 10,000x
www.xilinx.com