METRICS Standards and Infrastructure for Design Productivity Measurement and Optimization
Andrew B. Kahng and Stefanus Mantik
UCLA CS Dept., Los Angeles, CA
DAC 2000 Birds-of-a-Feather Meeting
June 7, 2000
Meeting Agenda
• Welcome
• Introduction and Motivations for METRICS
• METRICS system architecture
• Discussion: requirements for a standard METRICS system architecture
• METRICS standards
• Discussion: potential standard METRICS names/semantics
• Open Discussion
• Conclusion: Action Items going forward
Motivations
• How do we improve design productivity?
• Does our design technology / capability yield better productivity than it did last year?
• How do we formally capture best known methods, and how do we identify them in the first place?
• Does our design environment support continuous improvement of the design process?
• Does our design environment support what-if / exploratory design? Does it have early predictors of success / failure?
• Currently, there are no standards or infrastructure for measuring and recording the semiconductor design process
Purpose of METRICS
• Standard infrastructure for the collection and storage of design process information
• Standard list of design metrics and process metrics
• Analyses and reports that are useful for design process optimization

METRICS allows: Collect, Data-Mine, Measure, Diagnose, then Improve
Potential Data Collection/Diagnoses
• What happened within the tool as it ran? What was CPU / memory / solution quality? What were the key attributes of the instance? What iterations / branches were made, under what conditions?
• What else was occurring in the project? Spec revisions, constraint and netlist changes, …
• User performs the same operation repeatedly with nearly identical inputs
• Tool is not acting as expected
• Solution quality is poor, and knobs are being twiddled
Benefits
• Benefits for project management
  - accurate up-front estimates for people, time, technology, EDA licenses, IP re-use...
  - accurate resource prediction at any point in the design cycle
  - project post-mortems
    - everything tracked - tools, flows, users, notes
    - no "loose", random data or information left at project end (no more log files!!!)
  - management console
    - web-based, status-at-a-glance of tools, designs and systems at any point in the project
• Benefits for tool R&D
  - feedback on tool usage and parameters used
  - improved benchmarking
Current Status
• Complete prototype of METRICS with industry-standard components (Oracle, Java, XML, HTML, etc.)
• Metricized place-and-route runs on 100+ designs; seeking access to Synopsys regression suite data
• Complete metricization of Cadence system-level timing flow
• Metricization of synthesis and Verilog simulation tools
• Initial feedback from industry on METRICS standards
• Attempting to spec Intel requirements for the METRICS system
METRICS System Architecture
[Architecture diagram: each tool is instrumented with a transmitter (wrapper-based or API-based); transmitters send XML-encoded metrics over the inter/intranet to a web server, which loads them into the metrics data warehouse (DB); reporting, data mining, and Java applets access the warehouse through an API.]
Transmitter
• Wrapper-based
  - Perl scripts that wrap log files and STDOUT (a minimal sketch in this spirit follows below)
  - use existing log files (minor or no change in tool code)
  - completely dependent on log files
  - metrics list is limited to the available reported data
  - needs an extra process
• API-based
  - C/C++ library that is embedded inside tools
  - does not depend on log files
  - data are obtained directly from the tools
  - requires some changes in tool code
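For illustration only: the slides describe the wrapper-based transmitter as Perl scripts that scrape existing log files; the minimal sketch below shows the same idea in Java (one language is used for all examples here). The "METRIC name = value" log-line format, the metric names, and the print-instead-of-transmit behavior are assumptions, not part of METRICS.

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/** Minimal wrapper-style transmitter: scrapes metrics out of an existing tool log. */
public class LogWrapperTransmitter {
    // Hypothetical log-line format "METRIC name = value" -- real tool logs differ.
    private static final Pattern METRIC_LINE =
            Pattern.compile("^\\s*METRIC\\s+(\\w+)\\s*=\\s*(\\S+)");

    public static Map<String, String> scrape(String logPath) throws IOException {
        Map<String, String> metrics = new LinkedHashMap<>();
        try (BufferedReader in = new BufferedReader(new FileReader(logPath))) {
            String line;
            while ((line = in.readLine()) != null) {
                Matcher m = METRIC_LINE.matcher(line);
                if (m.find()) {
                    metrics.put(m.group(1), m.group(2)); // e.g. num_cells -> 15320
                }
            }
        }
        return metrics;
    }

    public static void main(String[] args) throws IOException {
        Map<String, String> metrics = scrape(args[0]);
        // A real transmitter would buffer these and forward them to the metrics
        // server; here we only print what was recovered from the log.
        metrics.forEach((name, value) -> System.out.println(name + " = " + value));
    }
}
```

This also makes the limitation on the left-hand column concrete: the wrapper can only report what the tool already writes to its log.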
METRICS Performance
• Transmitter
  - low CPU overhead
    - multi-threaded / multi-process - non-blocking scheme
    - buffering - reduces the number of transmissions (see the sketch below)
  - small memory footprint
    - limited buffer size
• Reporting
  - web-based
    - platform- and location-independent
  - dynamic report generation
    - always up-to-date
    - example: correlation plot - understand the relation between two metrics and find the importance of certain metrics to the flow
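A minimal sketch of the non-blocking, buffered transmission scheme listed above, assuming a bounded in-memory queue drained in batches by a background thread; the queue size, batch size, and the stubbed send() are illustrative assumptions rather than METRICS specifics.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

/** Sketch of a non-blocking, buffered transmitter (assumed sizes, illustrative only). */
public class BufferedTransmitter {
    private final BlockingQueue<String> buffer = new ArrayBlockingQueue<>(256); // bounded memory
    private final Thread sender;

    public BufferedTransmitter() {
        sender = new Thread(this::drainLoop, "metrics-sender");
        sender.setDaemon(true); // never keeps the tool process alive
        sender.start();
    }

    /** Called from the tool thread; never blocks. If the buffer is full the record is dropped. */
    public void record(String name, String value) {
        buffer.offer(name + "=" + value);
    }

    private void drainLoop() {
        List<String> batch = new ArrayList<>();
        while (true) {
            try {
                batch.add(buffer.take());   // wait for at least one record
                buffer.drainTo(batch, 31);  // batch up to 32 records per transmission
                send(batch);
                batch.clear();
            } catch (InterruptedException e) {
                return;
            }
        }
    }

    private void send(List<String> batch) {
        // Placeholder: a real transmitter would format the batch as XML and
        // ship it to the METRICS web server.
        System.out.println("sending " + batch.size() + " metric(s): " + batch);
    }

    public static void main(String[] args) throws InterruptedException {
        BufferedTransmitter t = new BufferedTransmitter();
        t.record("cpu_time", "00:04:12");
        t.record("num_cells", "15320");
        Thread.sleep(200); // give the sender thread a chance to flush
    }
}
```

Dropping records when the bounded buffer is full is one way to keep the memory footprint small, which matches the "limited buffer size" point above.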
Example Reports
[Example report charts: a pie chart of "% aborted per machine" (hen 95%, donkey 2%, bull 2%, rat 1%); a pie chart of "% aborted per task" across synthesis, ATPG, postSyntTA, placedTA, physical, BA, funcSim, and LVS; and an "LVS convergence" plot of LVS % (roughly 88-100) versus time (0-600).]
Current Results
• CPU_TIME = 12 + 0.027 × NUM_CELLS (corr = 0.93)
• More plots are accessible at http://xenon.cs.ucla.edu:8080/metrics
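As background on how a fit like the one above might be produced, here is a small sketch that computes an ordinary least-squares line and Pearson correlation over (NUM_CELLS, CPU_TIME) pairs; the data points below are invented for illustration and are not METRICS results.

```java
/** Least-squares fit y = a + b*x and Pearson correlation, as used for metric-vs-metric reports. */
public class MetricFit {
    public static void main(String[] args) {
        // Hypothetical (num_cells, cpu_time in seconds) samples -- made up, not real METRICS data.
        double[] x = {1000, 5000, 12000, 25000, 40000};
        double[] y = {  40,  150,   330,   700,  1090};

        int n = x.length;
        double sx = 0, sy = 0, sxx = 0, syy = 0, sxy = 0;
        for (int i = 0; i < n; i++) {
            sx += x[i]; sy += y[i];
            sxx += x[i] * x[i]; syy += y[i] * y[i];
            sxy += x[i] * y[i];
        }
        double b = (n * sxy - sx * sy) / (n * sxx - sx * sx); // slope
        double a = (sy - b * sx) / n;                          // intercept
        double corr = (n * sxy - sx * sy)
                / Math.sqrt((n * sxx - sx * sx) * (n * syy - sy * sy));

        System.out.printf("CPU_TIME = %.1f + %.4f * NUM_CELLS (corr = %.2f)%n", a, b, corr);
    }
}
```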
COM-Based Alternative
• EDA tools:
  - provide a COM interface for capturing internal information
  - add information collections ("counters") inside the tools
• METRICS transmitter:
  - gets information via the COM interface
  - formats the data in XML, encrypts the message, and sends it to the server
• Benefit:
  - allows independent development of the transmitter and the tools
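A rough sketch of the transmitter send path described above (format the data as XML, send it to the server). The element names and server URL are hypothetical, and the encryption step the slide calls for is omitted here; this is an illustration, not the METRICS transport implementation.

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Map;

/** Sketch: format metrics as XML and POST them to a (hypothetical) METRICS server URL. */
public class XmlSender {
    static String toXml(String tool, Map<String, String> metrics) {
        // No XML escaping is done here; fine for a sketch, not for production.
        StringBuilder sb = new StringBuilder("<metrics tool=\"" + tool + "\">\n");
        metrics.forEach((name, value) ->
                sb.append("  <metric name=\"").append(name)
                  .append("\" value=\"").append(value).append("\"/>\n"));
        return sb.append("</metrics>\n").toString();
    }

    static void post(String serverUrl, String xml) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(serverUrl).openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "text/xml");
        conn.setDoOutput(true);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(xml.getBytes(StandardCharsets.UTF_8)); // encryption omitted in this sketch
        }
        System.out.println("server responded: " + conn.getResponseCode());
    }

    public static void main(String[] args) throws Exception {
        String xml = toXml("placer", Map.of("num_cells", "15320", "cpu_time", "00:04:12"));
        post("http://metrics.example.com/transmit", xml); // hypothetical endpoint
    }
}
```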
Discussion on METRICS Architecture
METRICS Standards
• Standard metrics naming across tools
  - same name → same meaning, independent of tool supplier
  - generic metrics and tool-specific metrics
  - no more ad hoc, incomparable log files
• Standard schema for the metrics database
• Standard middleware for the database interface
• For complete current lists see: http://vlsicad.cs.ucla.edu/GSRC/METRICS
Examples of Metrics

Generic Tool Metrics
  tool_name        string
  tool_version     string
  tool_vendor      string
  compiled_date    mm/dd/yyyy
  start_time       hh:mm:ss
  end_time         hh:mm:ss
  tool_user        string
  host_name        string
  host_id          string
  cpu_type         string
  os_name          string
  os_version       string
  cpu_time         hh:mm:ss

Placement Tool Metrics
  num_cells        integer
  num_nets         integer
  layout_size      double
  row_utilization  double
  wirelength       double
  weighted_wl      double

Routing Tool Metrics
  num_layers       integer
  num_violations   integer
  num_vias         integer
  wirelength       double
  wrong-way_wl     double
  max_congestion   double

Partial list of metrics being collected now in Oracle8i
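To make the naming standard concrete, here is a hypothetical XML record that a placement run might transmit, using the generic and placement metric names listed above. The element structure and all values are assumptions for illustration; the slides fix the metric names, not a transport schema.

```xml
<metrics>
  <metric name="tool_name"       value="example_placer"/>  <!-- hypothetical values -->
  <metric name="tool_version"    value="8.0"/>
  <metric name="start_time"      value="14:02:11"/>
  <metric name="end_time"        value="14:06:23"/>
  <metric name="cpu_time"        value="00:04:12"/>
  <metric name="num_cells"       value="15320"/>
  <metric name="num_nets"        value="16044"/>
  <metric name="row_utilization" value="0.72"/>
  <metric name="wirelength"      value="1.83e6"/>
</metrics>
```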
Levels of Metrics
• Enterprise level
• Project level
• Tool/Process level
METRICS Site
• http://vlsicad.cs.ucla.edu/GSRC/METRICS
• Complete list of proposed metrics
• Source code for the METRICS server and API
• List of presentations on METRICS
• Links to various sites related to METRICS
Discussion on METRICS Standards