Poster on METRICS - UCSD VLSI CAD Laboratory


METRICS: A System Architecture for
Design Process Optimization
Andrew B. Kahng and Stefanus Mantik
Abstract
We describe the architecture and prototype implementation of METRICS, a system aimed at improving
design productivity through new infrastructure for design process optimization. A key precept for
METRICS is that measuring a design process is a prerequisite to learning how to optimize that design
process and continuously achieve maximum productivity. METRICS, therefore, (i) gathers characteristics of
design artifacts, design process, and communication during system development effort, and (ii) analyzes
and compares that data to analogous data from prior efforts. METRICS infrastructure consists of three
parts: (i) a standard metrics schema, along with metrics transmittal capabilities embedded directly into EDA
tools or into wrappers around tools; (ii) a metrics data warehouse and API for metrics retrieval; and (iii) data
mining and visualization capabilities for project prediction, tracking, and diagnosis.
Salient aspects of METRICS include the following. First, a standard metrics schema, along with standard
naming and semantics, allows a metric from one tool to have the same meaning as the same metric from
another tool from a different vendor. Second, transmittal APIs that are easily embeddable within tools allow
freedom from the "log files" that currently provide only limited visibility into EDA tools. With appropriate
security and access restrictions, these APIs can prevent loss of proprietary information while still enabling
detailed tracking of the design process. Third, at the heart of METRICS is a centralized data warehouse
that stores metrics information. Several means of data retrieval and visualization (e.g., web-based project
tracking and prediction) afford user flexibility. Finally, industry-standard components and protocols (http,
XML, Java, Oracle8i, etc.) are used to create a robust, reliable system prototype.



Motivations
- value of CAD tool improvement is not clear
  - a tool's value is well-defined only in the context of the overall design process
- must measure to diagnose, and diagnose to improve
- many possibilities for what to measure
  - solution: record everything, then mine the data
- unlimited range of possible diagnoses, e.g.:
  - user performs the same operation repeatedly with nearly identical inputs -> the tool is not acting as expected
  - solution quality is poor, and knobs are being twiddled
  - on-line docs always open to a particular page -> a command/option is unclear
- design process includes other aspects not captured by any "flow/methodology" bubble chart

METRICS Transmitter
- no functional change to the tool: uses the API to send the available metrics
- won't break the tool on transmittal failure
- low overhead: example: a standard-cell placer using the METRICS API shows < 2% runtime overhead, even less overhead with buffering
- a child process handles transmission while the parent process continues its job (sketched below)
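As a rough illustration of the child-process transmittal idea, here is a minimal C sketch. The send routine is a hypothetical stand-in (the poster does not show the transmitter internals); the point is that fork() isolates transmission, so a failure in the child cannot break the tool and the parent continues immediately.

/* Hedged sketch: decouple metric transmittal from the tool via fork().
   The fprintf() below is a placeholder for the real transmittal. */
#include <signal.h>
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

static void send_metric_async(const char *name, const char *value) {
    pid_t pid = fork();
    if (pid < 0)
        return;                 /* fork failed: drop the metric, never the tool */
    if (pid == 0) {             /* child: transmit, then exit */
        /* stand-in for the real transmittal (e.g., HTTP POST of an XML packet) */
        fprintf(stderr, "sending %s = %s\n", name, value);
        _exit(0);
    }
    /* parent returns immediately and continues its job */
}

int main(void) {
    signal(SIGCHLD, SIG_IGN);   /* auto-reap children so no zombies accumulate */
    send_metric_async("TOOL_NAME", "SamplePlacer");
    /* ... the tool's real work continues here, never blocked by transmittal ... */
    return 0;
}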



METRICS System Architecture

[Figure: system architecture. EDA tools, each with a metrics transmitter ("xmitter") that is either embedded in the tool or placed in a wrapper around it, send XML request/report packets over the inter/intra-net to a Java servlet server. The server stores metrics via SQL in the Oracle8i metrics data warehouse. Reporting flows back out over the inter/intra-net to web browsers via Java applets and to 3rd-party graphing tools (Excel, Lotus); a data-mining component is a future implementation.]
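The transmitter-to-server hop in the figure is HTTP carrying an XML packet. Below is a minimal C sketch of that hop; the host name, port, and /metrics servlet path are assumptions for illustration, not the real deployment.

/* Hedged sketch: POST one METRICS XML packet to the servlet server.
   Endpoint details are hypothetical; the sketch ignores the HTTP response. */
#include <arpa/inet.h>
#include <netdb.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

static int post_packet(const char *host, int port, const char *xml) {
    struct hostent *he = gethostbyname(host);
    if (!he)
        return -1;
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;
    struct sockaddr_in sa;
    memset(&sa, 0, sizeof sa);
    sa.sin_family = AF_INET;
    sa.sin_port = htons((unsigned short)port);
    memcpy(&sa.sin_addr, he->h_addr_list[0], (size_t)he->h_length);
    if (connect(fd, (struct sockaddr *)&sa, sizeof sa) < 0) {
        close(fd);
        return -1;
    }
    char hdr[256];
    snprintf(hdr, sizeof hdr,
             "POST /metrics HTTP/1.0\r\n"
             "Content-Type: text/xml\r\n"
             "Content-Length: %zu\r\n\r\n", strlen(xml));
    write(fd, hdr, strlen(hdr));
    write(fd, xml, strlen(xml));  /* body: the <METRICSPACKET> document */
    close(fd);
    return 0;
}

int main(void) {
    const char *xml = "<METRICSPACKET>...</METRICSPACKET>"; /* placeholder */
    return post_packet("metrics.example.com", 8080, xml) ? 1 : 0;
}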



Generic Tool Metrics
tool_name        char
tool_version     char
tool_vendor      char
compiled_date    mm/dd/yyyy
start_time       hh:mm:ss
end_time         hh:mm:ss
tool_user        char
host_name        char
host_id          char
cpu_type         char
os_name          char
os_version       char
cpu_time         hh:mm:ss

Placement Tool Metrics
num_cells        integer
num_nets         integer
layout_size      double
row_utilization  double
wirelength       double
weighted_wl      double

Routing Tool Metrics
num_layers       integer
num_violations   integer
num_vias         integer
wirelength       double
wrong-way_wl     double
max_congestion   double
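The schema tables above translate directly into a per-run record. As an illustration, a tool might hold the generic metrics in memory before transmittal as below; only the field names and schema types come from the table, while the struct itself and the field widths are assumptions.

/* Illustrative in-memory record mirroring the Generic Tool Metrics table. */
#include <time.h>

typedef struct {
    char   tool_name[64];      /* char        */
    char   tool_version[32];   /* char        */
    char   tool_vendor[64];    /* char        */
    char   compiled_date[11];  /* mm/dd/yyyy  */
    time_t start_time;         /* hh:mm:ss    */
    time_t end_time;           /* hh:mm:ss    */
    char   tool_user[32];      /* char        */
    char   host_name[64];      /* char        */
    char   host_id[32];        /* char        */
    char   cpu_type[32];       /* char        */
    char   os_name[32];        /* char        */
    char   os_version[32];     /* char        */
    double cpu_time;           /* hh:mm:ss in the schema; seconds here */
} generic_tool_metrics;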
METRICS Standards
- standard metrics naming across tools: same name -> same meaning, independent of tool supplier
- generic metrics and tool-specific metrics
- standard schema for the metrics database
- no more ad hoc, incomparable log files
- tool users: list of metrics needed for design process optimization
- tool vendors: implementation of the requested metrics, with the standardized naming

Capo/Cadence Flow

[Figure: example metrics-enabled flow. LEF/DEF inputs feed Capo Placer -> Placed DEF -> QP ECO -> Legal DEF -> WRoute -> Routed DEF -> Incr. WRoute -> Final DEF -> CongestionAnalysis -> Congestion Map; each step reports to METRICS through tool wrappers.]

Example of METRICS XML, API and Wrapper
<?xml version="1.0" ?>
<METRICSPACKET>
  <REQUEST>
    <TYPE> TOOL </TYPE>
    <PROJECTID> 173 </PROJECTID>
    <FLOWID> 9 </FLOWID>
    <PARAMETER> 32 </PARAMETER>
  </REQUEST>
  <METRICS>
    <PROJECTID> 173 </PROJECTID>
    <FLOWID> 9 </FLOWID>
    <TOOLID> P32 </TOOLID>
    <DATETIME> 93762541300 </DATETIME>
    <NAME> TOOL_NAME </NAME>
    <VALUE> CongestionAnalysis </VALUE>
  </METRICS>
</METRICSPACKET>

/** API Example **/
int main(int argc, char *argv[]) {
  ...
  toolID = initToolRun( projectID, flowID );
  ...
  printf( "Hello World\n" );
  sendMetric( projectID, flowID, toolID,
              "TOOL_NAME", "Sample" );
  sendMetric( projectID, flowID, toolID,
              "TOOL_VERSION", "1.0" );
  ...
  terminateToolRun( projectID, flowID, toolID );
  return 0;
}

## Wrapper example
( $File, $PID, $FID ) = @ARGV;
$TID = `initToolRun $PID $FID`;
open ( IN, "< $File" );
while ( <IN> ) {
  if ( /Begin\s+(\S+)\s+on\s+(\S+.*)/ ) {
    system "sendMetrics $PID $FID $TID TOOL_NAME $1";
    system "sendMetrics $PID $FID $TID START_TIME $2";
  }
  ...
}
system "terminateToolRun $PID $FID $TID";
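The wrapper above shells out to command-line transmitters (initToolRun, sendMetrics, terminateToolRun), whose internals the poster does not show. Below is a hypothetical sketch of what the sendMetrics front end over the C API could look like; the stub stands in for the real sendMetric() call, and string-typed IDs are assumed since the poster does not specify the ID types.

/* Hypothetical sendMetrics command-line front end, matching the argument
   order the wrapper uses: sendMetrics PID FID TID NAME VALUE. */
#include <stdio.h>

/* stub standing in for the real METRICS transmittal API */
static int sendMetric(const char *projectID, const char *flowID,
                      const char *toolID, const char *name, const char *value) {
    printf("metric %s = %s (project %s, flow %s, tool %s)\n",
           name, value, projectID, flowID, toolID);
    return 0;
}

int main(int argc, char *argv[]) {
    if (argc != 6) {
        fprintf(stderr, "usage: sendMetrics PID FID TID NAME VALUE\n");
        return 1;
    }
    return sendMetric(argv[1], argv[2], argv[3], argv[4], argv[5]);
}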
METRICS Reporting
- web-based:
  - platform independent
  - accessible from anywhere
  - always up-to-date
- understand the relation between two metrics
- find the importance of certain metrics to the flow
- example: correlation plots created on-the-fly (see the GNUPlot sketch below)

Example of Reports

[Figure: a web browser sends a request over the inter/intra-net to the Java servlet; the servlet queries the Oracle8i warehouse via SQL and pipes the data to a local graphing tool (GNUPlot), returning the plot in the report. Sample reports: Congestion vs. WL, Abort by Task, LVS Convergence.]
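The reporting path above ends at a local graphing tool. As a rough illustration (not the actual servlet code), this C sketch pipes rows to GNUPlot via popen() to render a "Congestion vs. WL" style correlation plot on the fly; the hard-coded points are made-up stand-ins for rows returned by the SQL query.

/* Hedged sketch of the reporting back end's last hop: stream (x, y) pairs
   to gnuplot through a pipe. Requires gnuplot to be installed. */
#include <stdio.h>

int main(void) {
    /* illustrative rows, standing in for metrics-warehouse query results */
    double wirelength[] = { 1.2e6, 1.5e6, 1.7e6, 2.1e6, 2.4e6 };
    double congestion[] = { 0.61, 0.72, 0.78, 0.85, 0.91 };

    FILE *gp = popen("gnuplot -persist", "w");
    if (!gp)
        return 1;
    fprintf(gp, "set title 'Congestion vs. WL'\n");
    fprintf(gp, "plot '-' using 1:2 with points title 'tool runs'\n");
    for (int i = 0; i < 5; i++)
        fprintf(gp, "%g %g\n", wirelength[i], congestion[i]);
    fprintf(gp, "e\n");           /* 'e' terminates inline data */
    return pclose(gp) == -1;
}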


Conclusions and Ongoing Work
- completion of METRICS server with Oracle8i, Java servlet, and XML parser
- initial transmittal API in C++
- METRICS wrapper for Cadence P&R tools
- simple reporting scheme for correlations
- work with the EDA and designer communities to establish standards
- improve the transmitter:
  - add message buffering
  - "recovery" system for network / server failure
- extend the METRICS system to include project management tools, email communications, etc.
- additional reports and data mining
Observations from experience with a previous prototype
- Implemented by OxSigen LLC (Fenstermaker, George, Thielges) in the Siemens Semicustom Highway flow
- The METRICS system must be non-intrusive; the best choice is for it to be embedded in the tools.
- "Big brother" issues must be spelled out clearly at the beginning, and buyoff from user advocates must be secured. All data must be anonymized; any attempt to profile or quantify individual performance on a project is dangerous (though useful).
- There is still a very big problem with flows. Ideally, the flow should be standardized, with a "Makefile"-type build environment for batch chip creation. There is no obvious common way to handle interactive tools yet, so we must be able to metricize flows in a standard way (which requires standard flows).
- The CAD / design community must get together to standardize flow terminology (or better educate people on it), especially now that so many new hybrid tools are emerging that combine traditional flow steps. If we simply had a standard set of agreed-upon milestones that occur during the lifecycle of a design, we could start to do accurate and more worthwhile benchmarking and prediction.
- There is still a very big problem with standardized data management (version control): lots of custom code to work around source-code control systems in real-world environments.
- Project management tools need to be more standardized and widely used. These tools act like metrics transmitters for project-related information, such as time allotted for certain tasks; this is critical for prediction of project-related details (how long to completion from this point, etc.).
Acknowledgment: DARPA