
DESIGN AND VALIDATION CHALLENGES
OF
MULTI-CORE SYSTEMS IN NANOSCALE SILICON
INTEL’S ANNUAL SYMPOSIUM ON VLSI CAD AND VALIDATION
July 9-10, 2007
Technion, Taub Building for Computer Science, Haifa, Israel
Speakers' Bios and Abstracts
Moshe Levinger – from IBM Labs Haifa
Daniel Kroening – from ETH Zurich
Sharad Malik – from Princeton US
Nikolaj Bjorner – from Microsoft Research US
Moshe Vardi – from Rice University US
Ofri Wechsler – from Intel Haifa
Eby Friedman – from Rochester US
George Stamoulis – from Athens University Greece
Ran Ginosar – from Technion Haifa
Ken Stevens – from Utah US
David Blaauw – from University of Michigan US
Sergey Gavrilov – from ICI Russia
Alon Gluska – from Intel Haifa
Simulation-Based Verification Technologies at IBM
By: Moshe Levinger
Abstract
This talk will provide an overview of the key simulation-based verification
technologies being developed at IBM today. It will illustrate how the very
nature of these state-of-the-art tools, as well as their software
architecture, is driven by two main forces: the need for increased
hardware quality and the requirement for dramatic improvements in
verification productivity.
The talk will also provide a high-level view of the usage methodology for
these tools and technologies as they are applied to the verification
of high-end hardware designs at IBM. Lastly, several future challenges and
possible directions to address them will be described.
Bio
Moshe Levinger is currently managing the 'Verification and Services'
department at the IBM Research lab in Haifa. His department focuses on
research and development of technologies and applications in three main
areas: simulation-based verification, constraint satisfaction, and machine
learning. Moshe joined IBM in 1992 and has worked on verification
technologies ever since. He holds a dual BSc degree in linguistics and
computer science from the Hebrew University of Jerusalem, and an MSc
degree from the Technion in computational linguistics.
Sequential equivalence checking for C and RTL Verilog
By: Daniel Kroening
Abstract:
It is common practice to write C or C++ models of circuits due to the
greater simulation efficiency. Once the C program satisfies the
requirements, the circuit is designed in a hardware description language
(HDL) such as Verilog. It is therefore highly desirable to automatically
perform a correspondence check between the C model and a circuit given
in HDL. We present an algorithm that checks consistency between an
ANSI-C program and a circuit given in Verilog using Predicate
Abstraction. The algorithm exploits the fact that the C program and the
circuit share many basic predicates.
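The correspondence check can be pictured with a toy sketch: a high-level C-style model and a bit-level circuit model are compared on every input of a small datapath. The saturating adder and the brute-force enumeration here are illustrative stand-ins; the talk's algorithm uses predicate abstraction precisely to avoid this kind of enumeration.

```python
# Toy illustration of C-vs-circuit consistency checking: a high-level
# reference model and a bit-level model are compared by exhausting the
# (small) input space of a 4-bit saturating adder.

WIDTH = 4
MASK = (1 << WIDTH) - 1  # 0b1111

def c_model(a, b):
    """High-level C-style model: unsigned add with saturation."""
    s = a + b
    return s if s <= MASK else MASK

def circuit_model(a, b):
    """Bit-level model: truncated add plus explicit carry-out check."""
    s = (a + b) & MASK
    carry_out = ((a + b) >> WIDTH) & 1
    return MASK if carry_out else s

def equivalent():
    """Exhaustive equivalence check over all input pairs."""
    return all(c_model(a, b) == circuit_model(a, b)
               for a in range(MASK + 1) for b in range(MASK + 1))

print(equivalent())  # True: the two models agree on every input
```

Predicate abstraction replaces this enumeration with shared predicates (e.g. "the sum overflows") so the check scales to real programs and circuits.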
Bio
Daniel Kroening received the M.E. and Ph.D. degrees in computer science
from Saarland University, Saarbrücken, Germany, in 1999 and 2001,
respectively. He joined the Model Checking group in the Computer Science
Department at Carnegie Mellon University, Pittsburgh, PA, USA, in 2001 as
a postdoc. In 2004, he was appointed assistant professor at the Swiss
Federal Institute of Technology (ETH) in Zurich, Switzerland. His research
interests include automated formal verification of hardware and software
systems, decision procedures, embedded systems, and hardware/software
co-design.
Verification-Driven Formal Architecture and
Microarchitecture Modeling
By: Sharad Malik
Abstract
Hardware verification is a major limiting factor in modern complex
design, with an increasing “verification gap” separating the scale of
designs that we can reasonably verify and that we can fabricate.
I will argue that a significant contributor to this gap is the limitation of
the Register-Transfer-Level (RTL) model for design verification. Higher
level models such as SystemC and SystemVerilog aim to raise the level
of abstraction to enhance designer productivity. However, overall they
are limited as they provide for executable but not analyzable
descriptions. Next, I will propose the use of formal analyzable design
models at two distinct levels above RTL, the architecture and the
microarchitecture level. The key characteristic of these models is that
they are based on concurrent units of data computation termed
transactions. The architecture level describes the computation/state
updates in the transactions and their interaction through shared data.
The microarchitecture level adds to this the resource usage in the
transactions as well as their interaction based on shared resources.
This generalizes the notions of architecture and microarchitecture
beyond programmable processor design. I will then illustrate the
applicability of these models in a top-down verification methodology
which addresses most of the concerns of current methodologies.
Bio
Sharad Malik received the B. Tech. degree in Electrical Engineering
from the Indian Institute of Technology, New Delhi, India in 1985 and the
M.S. and Ph.D. degrees in Computer Science from the University of
California, Berkeley in 1987 and 1990, respectively. He is currently a
Professor in the Department of Electrical Engineering, Princeton
University. His research spans all aspects of Electronic Design
Automation. His current focus areas are the synthesis and verification
of digital systems and embedded computer systems. He has received
the President of India's Gold Medal for academic excellence (1985), the
IBM Faculty Development Award (1991), an NSF Research Initiation
Award (1992), Princeton University Rheinstein Faculty Award (1994),
the NSF Young Investigator Award (1994), Best Paper Awards at the
IEEE International Conference on Computer Design (1992), ACM/IEEE
Design Automation Conference (1996), IEEE/ACM Design Automation
and Test in Europe Conference (2003); the Walter C. Johnson Prize for
Teaching Excellence (1993) and the Princeton University Engineering
Council Excellence in Teaching Award (1993, 1994, 1995). He
serves/has served on the program committees of DAC, ICCAD and
ICCD. He served as the technical program co-chair for DAC in 2000 and
2001, Panels Chair for 2002 and Vice-Chair for 2003. He is on the
editorial boards of the Journal of VLSI Signal Processing, Design
Automation for Embedded Systems and IEEE Design and Test. He is a
fellow of the IEEE. He has published numerous papers, book chapters
and a book (Static Timing Analysis for Embedded Software) describing
his research.
His research in functional timing analysis and propositional satisfiability
has been widely used in industrial electronic design automation tools.
Formal Verification at Microsoft Research
By: Nikolaj Bjorner
Abstract
This talk presents some of the more recent and ongoing advances in
software verification tools using automated verification technology at
Microsoft Research. Established tools include the static driver verifier,
SLAM which uses decision procedures and symbolic model checking,
and constraint solvers around the static analysis tools Prefix and ESP.
Current activities related to automated verification at Microsoft in
Redmond include the Spec# and Pex projects. The Spec# project comprises
an object-oriented programming language extending C# with contracts,
together with a verification-oriented intermediate format, called
Boogie, that is used for extracting verification conditions. We have very
recently initiated an ambitious project in the context of Boogie:
a comprehensive functional verification of the Viridian hypervisor
virtualization core. The Pex and related projects use automated decision
procedures to analyze program runs and generate unit tests, using the
model-finding capabilities of the decision procedures and constraint
solvers. The talk will cover these applications and the unique
opportunities they provide for enhancing validation during the software
development process. I will also cover some of the tools that are being
developed for automated verification, including the Z3 solver.
Bio
Nikolaj Bjorner is a researcher at Microsoft Research in Redmond. He
is currently working with Leonardo de Moura on a next generation SMT
(Satisfiability Modulo Theories) solver called Z3.
Recently Nikolaj worked in the Core File Systems group where he
designed and implemented the synchronization core of DFS-R
(Distributed File System Replication), which is part of Windows
Server 2003-R2, Vista, and MSN Messenger. He also designed parts of
the RDC (Remote Differential Compression) protocol, used for
compressing file transfers. In a much more distant previous life, he
developed STeP, the Stanford Temporal Prover.
Formal Techniques for SystemC Verification
By: Moshe Vardi
Abstract
SystemC has emerged lately as a de facto, open, industry standard
modeling language, enabling a wide range of modeling levels, from RTL
to system level. Its increasing acceptance is driven by the increasing
complexity of designs, pushing designers to higher and higher levels of
abstractions.
While a major goal of SystemC is to enable verification at a higher level of
abstraction, enabling early exploration of system-level designs, the
focus so far has been on traditional dynamic validation techniques. It is
fair to say that the development of formal-verification techniques for
SystemC models is at its infancy. In spite of intensive recent activity in
the development of formal-verification techniques for software,
extending such techniques to SystemC is a formidable challenge. The
difficulty stems from both the object-oriented nature of SystemC, which
is fundamental to its modeling philosophy, and its sophisticated
event-driven simulation semantics.
In this talk we discuss what is needed to develop formal techniques for
SystemC verification, augmenting dynamic validation techniques. By
formal techniques we refer here to a range of techniques, including
assertion-based dynamic validation, symbolic simulation, formal test
generation, explicit-state model checking, and symbolic model checking.
Bio
Moshe Y. Vardi is the Karen Ostrum George Professor in Computational
Engineering,
Director of the Computer and Information Technology Institute
and former chair of the Computer Science Department at Rice
University. He is a Fellow of the Association for Computing
Machinery and American Association for the Advancement of
Science, as well as a member of the US National Academy of
Engineering.
A longer version is at
http://www.cs.rice.edu/~vardi/bio.html
Design and validation challenges
of Multi-Core Systems - Intel Experience
By: Ofri Wechsler
Abstract
Invited Talk
Bio
Ofri Wechsler is an Intel Fellow in the Mobility Group and Director of
Mobility Microprocessor Architecture at Intel Corporation. In this role,
Wechsler is responsible
for the Yonah and Intel® Core™ microarchitectures, and is actively
working on future generations.
Previously, Wechsler served as platform architect for the Timna
platform. He also served as manager for the Israel Design Center
Architecture Validation team, responsible for the validation of the
P55C version of the Intel® Pentium® processor. Wechsler joined Intel
in 1989 as a design engineer for the i860.
Wechsler received his bachelor’s degree in electrical engineering
from Ben Gurion University, Beer Sheva, Israel, in 1998. He has four
U.S. patents.
Substrate Noise Coupling in Mixed-Signal Systems
By: Eby Friedman
Abstract
The integration of digital, analog, and RF circuits has become ubiquitous
in modern integrated circuits to achieve high performance and reduced
cost. A major issue in mixed-signal systems is substrate noise coupling.
Two aspects of substrate noise coupling will be discussed in this talk. In
the first part, challenges to analyzing substrate coupling in large scale
mixed-signal circuits will be presented. A substrate contact merging
algorithm based on spatial voltage differences will be proposed to
significantly decrease the complexity of existing substrate extraction
techniques. The algorithm determines different voltage domains over the
substrate. Each voltage domain is represented by only one noise source,
reducing the overall complexity of the analysis. Furthermore, the total
error is bounded, supporting tradeoffs between complexity and accuracy.
In the second part, existing substrate noise reduction techniques will be
reviewed and a new substrate biasing methodology will be presented
based on noise aware cell design. Standard cells with dedicated contacts
will be proposed to replace the aggressor cells in a circuit. The
advantages of the proposed scheme such as greater noise reduction,
application flexibility, and possible integration into existing design
automation tools will be discussed.
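As a rough illustration of the merging idea (the function name, the greedy one-dimensional strategy, and the voltage numbers are invented for this sketch, not taken from the talk): contacts whose node voltages differ by less than a tolerance collapse into one voltage domain, and each domain is then represented by a single equivalent noise source, bounding the error by the tolerance.

```python
# Hypothetical sketch of voltage-based substrate contact merging.
# Contacts within `tol` volts of a domain's anchor contact are merged;
# each resulting domain is modeled by one noise source (its mean voltage).

def merge_contacts(voltages, tol):
    """Greedily group contact voltages into domains within `tol` volts.

    Returns a list of (representative_voltage, member_indices) pairs,
    where the representative is the mean voltage of the merged contacts.
    """
    order = sorted(range(len(voltages)), key=lambda i: voltages[i])
    domains = []  # each entry: list of contact indices, first is the anchor
    for i in order:
        if domains and voltages[i] - voltages[domains[-1][0]] <= tol:
            domains[-1].append(i)  # close enough to the anchor: merge
        else:
            domains.append([i])    # start a new voltage domain
    return [(sum(voltages[i] for i in d) / len(d), d) for d in domains]

# five contacts, two natural clusters around 0.11 V and 0.30 V
sources = merge_contacts([0.10, 0.11, 0.30, 0.31, 0.12], tol=0.05)
print(len(sources))  # 2 equivalent noise sources instead of 5 contacts
```

Because every member sits within `tol` of its domain's anchor, the per-contact voltage error is bounded, which is the complexity/accuracy tradeoff the abstract describes.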
Bio
Eby G. Friedman received the B.S. degree from Lafayette College in 1979 and the M.S.
and Ph.D. degrees from the University of California, Irvine, in 1981 and 1989,
respectively, all in electrical engineering. From 1979 to 1991, he was with Hughes
Aircraft Company, rising to the position of manager of the Signal Processing Design
and Test Department, responsible for the design and test of high performance digital
and analog IC's. He has been with the Department of Electrical and Computer
Engineering at the University of Rochester since 1991, where he is a Distinguished
Professor, the Director of the High Performance VLSI/IC Design and Analysis
Laboratory, and the Director of the Center for Electronic Imaging Systems. He is also a
Visiting Professor at the Technion - Israel Institute of Technology. His current research
and teaching interests are in high performance synchronous digital and mixed-signal
microelectronic design and analysis with application to high speed portable
processors and low power wireless communications. He is the author of more than
300 papers and book chapters, several patents, and the author or editor of eight books
in the fields of high speed and low power CMOS design techniques, high speed
interconnect, and the theory and application of synchronous clock and power
distribution networks. Dr. Friedman is the Regional Editor of the Journal of Circuits,
Systems and Computers, a Member of the editorial boards of the Analog Integrated
Circuits and Signal Processing, Microelectronics Journal, Journal of Low Power
Electronics, and Journal of VLSI Signal Processing, Chair of the IEEE Transactions on
Very Large Scale Integration (VLSI) Systems steering committee, and a Member of the
technical program committee of a number of conferences. He previously was the
Editor-in-Chief of the IEEE Transactions on Very Large Scale Integration (VLSI)
Systems, a Member of the editorial board of the Proceedings of the
IEEE and IEEE Transactions on Circuits and Systems II: Analog and Digital Signal
Processing, a Member of the Circuits and Systems (CAS) Society Board of Governors,
and Program and Technical chair of several IEEE conferences, and a recipient of the
University of Rochester Graduate Teaching Award, and a College of Engineering
Teaching Excellence Award. Dr. Friedman is a Senior Fulbright Fellow and an IEEE
Fellow.
Concurrent optimization of power network
and circuit topology
By: George Stamoulis
Abstract
We propose a new approach to IC design in which we concurrently
optimize the power supply network as well as the circuit topology and
the transistor sizes. This approach is supported by new power grid
optimization techniques based on Extreme Value Theory and linear circuit
formulations, and by circuit reconfiguration and sizing capabilities that
combine fast convergence with a constrained search of the design space.
The main targets of this approach are delay and dynamic power,
with significant impact on leakage and SER.
Bio
Georgios I. Stamoulis received the Ph.D. degree from the University of
Illinois, Urbana-Champaign, in 1994. He then joined the Electrical and
Computer Engineering Department of the University of Iowa as a Visiting
Assistant Professor, and in 1995 he joined Intel Corp. where he worked on
CAD tools and design methodologies for power reduction.
In 2001 he joined the Technical University of Crete as an Assistant
Professor. Since 2003, he has been an Associate Professor at the University
of Thessaly, Greece. His research interests include CAD tools and design
methodologies for low power and reliability of ICs, low-power architectures,
and wireless sensor networks.
Clocking and Synchronization Issues
in Multi-Core Systems
By: Ran Ginosar
Abstract
Multi-Core Systems (MCS), comprising homogeneous CMP and
heterogeneous MPSoC, employ multiple clock domains. The clocking
systems must (a) use minimum dynamic and static power, and (b) provide
for smooth synchronization when crossing clock domain boundaries. In
this talk we review power efficient clock distribution, multi-synchronous
clocks and interfaces, and multi-frequency clocks for MCS. Low power
clock distributions avoid global skew minimization. They employ
hierarchical distribution, local skews, metal bridging, local deskew
circuits, and clock tuning. The talk presents the methods as well as their
relative advantages and drawbacks. Several cores running at the
same frequency do not need to maintain the same phase. Power can be
saved by using simpler “multi-synchronous” clocking, which is prone to
variations due to slow drifts of voltage, temperature and PLL phase
errors. Adaptive phase-compensating interfaces replace synchronizers
in moving data from one domain to another, enabling zero-latency,
full-speed data transfers.
Various options are reviewed for data transfers between domains that
operate at different frequencies: two-clock (a.k.a. asynchronous) FIFOs,
predictive synchronizers, and locally-delayed latching synchronizers.
Their power implications will also be discussed.
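The two-clock FIFOs mentioned above conventionally pass read/write pointers between clock domains in Gray code, so that only one bit changes per increment and a pointer sampled mid-transition is off by at most one count. A minimal sketch of that encoding (illustrative, not taken from the talk):

```python
# Gray-code pointer encoding as used in two-clock (asynchronous) FIFOs:
# consecutive pointer values differ in exactly one bit, so a pointer
# crossing a clock-domain boundary can never be sampled as garbage.

def to_gray(n):
    """Binary -> Gray code."""
    return n ^ (n >> 1)

def from_gray(g):
    """Gray code -> binary, folding the high bits back down."""
    n = g
    shift = 1
    while n >> shift:
        n ^= n >> shift
        shift <<= 1
    return n

# successive write-pointer values differ in exactly one bit:
print(all(bin(to_gray(i) ^ to_gray(i + 1)).count("1") == 1
          for i in range(15)))  # True
```

In hardware, the Gray-coded pointer is double-flopped into the other domain and decoded there to compute full/empty flags; the single-bit-change property is what makes that safe.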
Bio
Ran Ginosar received a BSc (EE+CS) from the Technion in 1978 and a
PhD (EECS) from Princeton University in 1982. He worked at AT&T Bell
Labs and subsequently joined the EE and CS departments
at the Technion in Israel. During his tenure there he spent Sabbaticals
at the University of Utah (CS dept) and at Intel Research Lab in
Oregon. He has also spent time at several start-up companies in
various areas of VLSI architecture (imaging and video, medical
devices, and multi-protocol wireless chip-sets).
Ran has conducted research on high speed digital and mixed signal
VLSI. He focused on asynchronous logic and synchronization, as well
as fast clocking, multi-clock domains and Networks-on-Chip.
Relative Timing and Elastic System Design
By: Ken Stevens
Abstract
Relative timing and its application to elastic designs will be presented.
Relative Timing is a technique for modeling and verifying circuits that
operate under timing constraints. Timing is represented as explicit
logical constraints. Relative signal orderings are maintained based on
delay requirements of a circuit for correct operation. Thus a complete
and sufficient set of timing conditions is formally proven for a protocol
or circuit. The logical nature of the constraints substantially enhances
the ability to verify large designs and models. The constraints are used
to drive synthesis and layout, and are validated through post-layout
static or statistical timing analysis. Relative timing increases the class
of circuits that can be designed and validated, including designs with
non-traditional clocking methodologies. Elastic design provides
Lego-like composability of integrated circuit modules and can deliver
flexibility in clocking frequencies. Relative timing will be demonstrated
by applying timing flows to some design examples that have more
complicated timing conditions than a traditional clocked pipeline. This
will include an elastic communication network and a simple
desynchronized microprocessor where the clock is replaced with
handshaking circuitry.
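The core idea, timing expressed as explicit logical constraints, can be sketched with a toy checker (the event names and the checker itself are invented for illustration): each relative-timing constraint states that one signal event must precede another, and any simulated trace can then be checked against the constraint set without reference to physical delays.

```python
# Toy relative-timing check: constraints are (before, after) event pairs,
# and a trace satisfies them iff every "before" event precedes its "after"
# event. Real flows prove such orderings and feed them to synthesis/STA.

def satisfies(trace, constraints):
    """Check that every (before, after) pair is ordered in the trace."""
    pos = {event: i for i, event in enumerate(trace)}
    return all(pos[a] < pos[b] for a, b in constraints)

# e.g. data must be stable before the latch closes, which must precede ack
constraints = [("data_valid", "latch_close"), ("latch_close", "ack")]
good = ["req", "data_valid", "latch_close", "ack"]
bad  = ["req", "latch_close", "data_valid", "ack"]
print(satisfies(good, constraints), satisfies(bad, constraints))  # True False
```

The logical form is what makes large-design verification tractable: orderings are discrete facts that a model checker can reason about, while the delay requirements backing them are validated separately by post-layout timing analysis.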
Bio
Kenneth S. Stevens is an Associate Professor at the University of Utah.
He has enjoyed a career that includes both industrial and academic
positions. Prior to Utah, Ken worked at Intel's Strategic CAD Lab in
Hillsboro Oregon for nine years on CAD and circuit research. A highlight
at Intel was the design and implementation of the front end of the
Pentium Processor with asynchronous circuits that operated at 3.5 GHz
when the lead processor's fastest clock speed was 450MHz. Prior to
Intel, Ken was an Assistant Professor at the Air Force Institute of
Technology (AFIT) in Dayton, Ohio, the graduate school of the Air Force.
There he developed an ultra-low-power FFT circuit that was fabricated
for space applications. Ken received his Ph.D. at the University of
Calgary, where he researched the verification of sequential circuits
and systems. Before that he worked at the Fairchild Laboratory for AI
Research and at Hewlett-Packard
Labs. He developed an asynchronous circuit synthesis methodology
called "burst mode", and designed and fabricated an ultra high
bandwidth communication chip for distributed memory multiprocessors
that won a best paper award. Ken has published in journals and
conferences, has 8 patents, and is a Senior Member of the IEEE. He also
created a successful software startup company, is the developer of the
international spelling checker for emacs (ispell.el) that was given the
GNU copyleft, and serves on program committees and conference
chairmanships. His current research includes novel timing verification
technology that transforms circuit timing into logical expressions, elastic
network fabric and desynchronized pipeline designs, and asynchronous
circuits and systems.
New Approaches for Statistical
Timing Analysis and Post-Silicon Tuning
By: David Blaauw
Abstract
Process variation has raised considerable concern in recent process
technologies and has fueled extensive research in so-called Statistical
Static Timing Analysis (SSTA) and optimization. The developed
approaches closely mirror those of deterministic STA and aim to
address yield improvement through statistically aware optimizations,
such as gate sizing. However, the SSTA formulation has proven to be
more complex than hoped and remains a very challenging research
problem. In this presentation, we therefore explore a different approach,
referred to as MC-STA, which uses smart sampling methods to speed
up Monte Carlo-based static timing analysis.
We show that the run time scales favorably with circuit size. Using
criticality-aware Latin hypercube sampling and pseudo-Monte Carlo
sequence generation, we show that MC-STA has a run time
approaching that of SSTA while providing a more versatile solution.
In addition, we also discuss the use of post-silicon tuning to mitigate
process variation. We show how post-silicon tuning requires design
time optimization for efficient implementation, for instance by
performing clustering for adaptive body biasing. We present results on
large benchmark circuits demonstrating the effectiveness of the
proposed approaches.
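A hedged sketch of the sampling side of MC-STA (the toy circuit, parameter spread, and seed are invented; the talk's method additionally uses criticality awareness and pseudo-Monte Carlo sequences): Latin hypercube sampling splits each process parameter's range into N strata and draws exactly one sample per stratum, so N samples cover the parameter space far more evenly than N independent random draws.

```python
import random

def latin_hypercube(n_samples, n_params, rng):
    """n_samples points in [0,1)^n_params; each axis is cut into
    n_samples equal strata and every stratum is sampled exactly once."""
    columns = []
    for _ in range(n_params):
        strata = list(range(n_samples))
        rng.shuffle(strata)  # random stratum-to-sample assignment per axis
        columns.append(strata)
    return [[(columns[p][i] + rng.random()) / n_samples
             for p in range(n_params)]
            for i in range(n_samples)]

def path_delay(point, nominal=(1.0, 2.0, 1.5), spread=0.2):
    """Toy 3-gate path: each gate delay varies +/-20% around nominal."""
    return sum(nom * (1 - spread + 2 * spread * u)
               for nom, u in zip(nominal, point))

rng = random.Random(42)
delays = [path_delay(p) for p in latin_hypercube(200, 3, rng)]
print(f"sampled worst-case path delay: {max(delays):.3f}")
```

Each sampled point yields one deterministic STA run; the stratification is what lets a modest number of runs estimate the delay distribution's tail, which is the run-time advantage the abstract claims for MC-STA.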
Bio
David Blaauw received his B.S. in Physics and Computer Science from
Duke University in 1986, and his Ph.D. in Computer Science from the
University of Illinois, Urbana, in 1991. Until August 2001, he worked for
Motorola, Inc. in Austin, TX, where he was the manager of the High
Performance Design Technology group. Since August 2001, he has been
on the faculty at the University of Michigan as an Associate Professor.
His work has focused on VLSI design and CAD with particular emphasis
on circuit design and optimization for high performance and low power
applications. He was the Technical Program Chair and General Chair for
the International Symposium on Low Power Electronics and Design and
was the Technical Program Co-Chair and a member of the Executive
Committee of the ACM/IEEE Design Automation Conference.
He is currently a member of the ISSCC Technical Program Committee.
Fast worst case stimulus pattern selection
based on exhaustive search
By: Sergey Gavrilov
Abstract
The goal of this work is a new algorithm and method for automatically
generating input stimuli and states that cause a specified output to
switch with maximal delay. The primary use of these algorithms is in
full-custom design without preliminary library characterization.
The algorithm is a further improvement of the approach currently used
in the Intel internal tool “Tango”, which performs device-level analysis
for several Intel CPU projects.
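The exhaustive-search idea can be sketched as follows (the toy cell, delay numbers, and delay model are invented; the real tool works at device level on extracted netlists): enumerate all pairs of previous and next input states, keep only pairs that actually switch the output, and select the pair that maximizes a delay model.

```python
from itertools import product

# Toy per-input delay contribution (ns) when that input switches
INPUT_DELAY = {"a": 0.12, "b": 0.20, "c": 0.17}

def output(vec):
    """Toy cell: y = (a AND b) OR c."""
    return bool((vec["a"] and vec["b"]) or vec["c"])

def transition_delay(prev, curr):
    """Delay of a transition: sum over the inputs that switch."""
    return sum(INPUT_DELAY[i] for i in prev if prev[i] != curr[i])

def worst_case(inputs=("a", "b", "c")):
    """Exhaustively search all (previous-state, next-state) pairs."""
    best = None
    for prev_bits in product([0, 1], repeat=len(inputs)):
        for curr_bits in product([0, 1], repeat=len(inputs)):
            prev = dict(zip(inputs, prev_bits))
            curr = dict(zip(inputs, curr_bits))
            if output(prev) == output(curr):
                continue  # the specified output must actually switch
            d = transition_delay(prev, curr)
            if best is None or d > best[0]:
                best = (d, prev, curr)
    return best

delay, before, after = worst_case()
print(f"worst-case transition delay: {delay:.2f} ns")  # 0.49: all inputs switch
```

The search space is exponential in the number of inputs, which is why the exhaustive formulation fits cell- and device-level analysis of individual stages rather than whole designs.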
Verification practices and challenges in Intel
By: Alon Gluska
Abstract
The growing complexity and time-to-market pressure make the functional
verification of CPUs the major obstacle to the completion of designs at
Intel, as in the rest of the industry. In this presentation, we will present
the challenges we face at Intel when verifying CPUs. We will briefly
present the major components of a modern CPU, the verification methods
and practices, and the key technologies that need to be delivered to
resolve the main issues. In particular, we will refer to formal
verification technologies, abstractions of designs, power states, and
unique aspects of microcode and firmware verification.
Bio
Alon Gluska holds an M.Sc. in computer engineering and an M.Sc. in
electrical engineering from the Technion. He worked for IBM on the
development of verification technologies for 7 years and has been
working for Intel on the verification of CPUs for over 10 years. During
this time he worked on the verification of the Pentium 4 and on multiple
generations of the Pentium M (Centrino) line, including Banias, Dothan,
Yonah (Core Duo) and Merom (Core 2 Duo).