Progress on METRICS: A System for Automatic Recording and Prediction of Design Quality Metrics
Andrew B. Kahng and Stefanus Mantik
Annual Review, December 2000
Calibrating Achievable Designs
Abstract
The GSRC METRICS project seeks to improve design productivity through design
process optimization infrastructure. The METRICS system (i) unobtrusively gathers
characteristics of design artifacts, design process, and communication during system
development effort, and (ii) analyzes and compares that data to analogous data from
prior efforts. METRICS infrastructure consists of
– standard metrics schema, naming and semantics, along with metrics transmittal
capabilities embedded directly into EDA tools or into wrappers around tools;
– a metrics data warehouse and metrics reports; and
– data mining and visualization capabilities for project prediction, tracking, and
diagnosis.
Industry-standard components and protocols (http, XML, Java, Oracle8i, etc.) lead to a
robust, portable (and open-source) system prototype. We have extended METRICS to
include (i) the collection of design flow information for use in flow optimization, and
(ii) integration with datamining tools to allow automatic generation of design and flow
QOR predictors. Our flow optimization experiments address incremental multilevel FM
partitioning in an incremental (ECO-oriented) design flow.
We also demonstrate QOR predictors that are generated automatically from the
METRICS data warehouse by the “Cubist” datamining tool for commercial placement,
clock tree generation, and routing tools.
[Figure: back-end flow (Capo placer plus Cadence SLC tools) instrumented with METRICS. LEF/DEF input drives the Capo placer (placed DEF); QP ECO legalization (legal DEF), QP optimization (optimized DEF), CTGen clock-tree generation (clocked DEF), and WRoute / incremental WRoute routing (routed DEF) follow, with congestion maps among the per-tool metrics transmitted to the METRICS warehouse.]
Motivation: complexity of the design process
• Design process includes aspects not captured by any “flow/methodology” bubble chart
– Must measure to diagnose, and diagnose to improve
• Many possibilities for what to measure
• Unlimited range of possible diagnoses
– User performs same operation repeatedly with nearly identical inputs
• tool is not acting as expected
• solution quality is poor, and knobs are being twiddled
– On-line docs always open to a particular page
• command/option is unclear
• TASK_NO: the number of times the current task has been executed within the same execution of its parent task
• FLOW_SEQUENCE: a record of the execution hierarchy, starting from the first task
[Figure: example flow with tasks T1, T2, and T3 executed repeatedly, including backward edges and self-loops; each invocation is logged with its TASK_NO and FLOW_SEQUENCE.]
No  Task  Task No  Flow Sequence
1   T1    1        1
2   T2    1        1/1
3   T1    2        2
4   T2    1        2/1
5   T3    1        2/1/1
6   T2    2        2/2
7   T3    1        2/2/1
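The TASK_NO / FLOW_SEQUENCE bookkeeping can be sketched in a few lines of Python. This is an illustrative reconstruction, not the METRICS implementation: function and field names are ours, and we assume each logged event carries the task name and its depth in the flow hierarchy.

```python
from collections import Counter

def record_flow(events):
    """Replay (task, depth) events and assign TASK_NO / FLOW_SEQUENCE.

    depth 0 is a top-level task; a task at depth d is a child of the
    most recently started task at depth d - 1.
    """
    root = {"seq": "", "counts": Counter()}
    stack = [root]                       # stack[d] = active parent for depth d
    rows = []
    for task, depth in events:
        del stack[depth + 1:]            # close frames deeper than this task
        parent = stack[depth]
        parent["counts"][task] += 1      # executions of this task in this parent
        task_no = parent["counts"][task]
        seq = (parent["seq"] + "/" if parent["seq"] else "") + str(task_no)
        rows.append((task, task_no, seq))
        stack.append({"seq": seq, "counts": Counter()})
    return rows
```

Replaying the seven invocations shown in the table reproduces the same TASK_NO and FLOW_SEQUENCE values, including the backward edge that restarts T1 and the repeated T2 under it.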
Integration with Datamining Tools
[Figure: datamining integration architecture. A servlet accepts requests and issues SQL to the metrics DB, returning result tables; datamining tools connect through a datamining interface that forwards SQL and relays results.]
Example applications:
• Parameter sensitivity analysis: analysis of which input parameters
have the most impact on tool results
• Field of use analysis: analysis of the (runtime, capacity, quality) limits
at which the tool will break
• Process monitoring: analysis of potential or likely outcomes of the
current design process (while the process is still running)
• Resource monitoring: analysis of resource demands for given tasks
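As a concrete illustration of parameter sensitivity analysis, the sketch below ranks logged input parameters by absolute correlation with a result metric. This is a deliberate simplification of what the system does (the actual analysis is driven by datamining tools such as Cubist), and all function and field names here are illustrative.

```python
def pearson(xs, ys):
    """Pearson correlation of two equal-length numeric sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def rank_sensitivity(runs, params, target):
    """Rank input parameters by |correlation| with a result metric.

    `runs` is a list of dicts, one per logged tool run, e.g. pulled
    from the metrics warehouse through the datamining interface.
    """
    scores = {p: abs(pearson([r[p] for r in runs], [r[target] for r in runs]))
              for p in params}
    return sorted(scores, key=scores.get, reverse=True)
```

On a set of runs where the target metric tracks one parameter linearly and another only loosely, that parameter comes back first in the ranking.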
Experiments in Placement Domain
Example flows with incremental partitioning:
foreach testcase
  foreach DeltaI
    foreach CPUbudget
      foreach breakup (n = number of parts)
        I_current = I_initial
        S_current = S_initial
        for i = 1 to n
          I_next = I_current + DeltaI_i
          run incremental multilevel FM partitioner on I_next to produce S_next
          if CPU_current > CPU_budget then break
          I_current = I_next
          S_current = S_next
        save number of cuts
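The inner loop of this experiment might be driven as follows. `run_incremental_fm` is a stand-in stub (the real experiments invoke an incremental multilevel FM partitioner), so only the control flow — perturb, repartition, check the CPU budget, commit — is meaningful here.

```python
import time

def run_incremental_fm(instance, prev_solution):
    # Stand-in stub: a real incremental multilevel FM partitioner would
    # refine prev_solution for the perturbed instance. Here the "cut"
    # count is a dummy function of instance size.
    return {"cuts": len(instance) // 10}

def eco_flow(deltas, cpu_budget, initial_instance, initial_solution=None):
    """Apply perturbations DeltaI_i one at a time until the CPU budget is spent."""
    instance, solution = list(initial_instance), initial_solution
    start = time.process_time()
    cut_trace = []
    for delta in deltas:                      # for i = 1 to n
        candidate = instance + list(delta)    # I_next = I_current + DeltaI_i
        solution = run_incremental_fm(candidate, solution)
        if time.process_time() - start > cpu_budget:
            break                             # CPU_current > CPU_budget
        instance = candidate                  # I_current = I_next
        cut_trace.append(solution["cuts"])    # save number of cuts (per step here)
    return cut_trace
```

With the dummy partitioner and a generous budget, two ten-element perturbations yield cut counts of 1 and then 2 as the instance grows.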
Example rules:
• If (27401 < num_edges ≤ 34826 & 143.09 < cpu_time ≤ 165.28 & perturbation_delta ≤ 0.1) then num_inc_parts = 4 & num_starts = 3
• If (27401 < num_edges ≤ 34826 & 85.27 < cpu_time ≤ 143.09 & perturbation_delta ≤ 0.1) then num_inc_parts = 2 & num_starts = 1
Prediction Results from Datamining
[Scatter plot: predicted CPU time (secs) vs. actual CPU time (secs), both axes 5k–25k.]
• Value of CAD tool improvements is not clear
– value is well-defined only in the context of the overall design process
Design Flow Metrics Recording
Current Metricized Flows
Mixed UCLA-Cadence Flow
– Ability to make silicon has outpaced the ability to design it
– Key question: “Will the project succeed, i.e., finish on schedule and under budget while meeting performance goals?”
– SOC design requires an organized, optimized design process
Conclusions
• METRICS' ability to record design flow metrics
– the sequential design flow model allows the METRICS system to keep track of tool
invocations within a given flow, even with arbitrary backward edges or self-loops
in the task sequence
• METRICS integration with datamining tools
Sample rules:
if (num_nets ≤ 7332) then CPU_time = 21.9 + 0.0019 num_cells + 0.0005 num_nets + 0.07 num_pads − 0.0002 num_fixed_cells
if (num_overlap_lyr ≤ 0 & num_cells ≤ 71413) then CPU_time = −15.6 + 0.0888 num_nets − 0.0559 num_cells − 0.0015 num_fixed_cells − 6 num_overlap_lyr − 1 num_routing_lyr
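Rules of this form translate directly into a piecewise-linear predictor. The sketch below hard-codes the two sample rules with the coefficients as printed; it is illustrative only, not the generated Cubist model (Cubist averages the predictions of all matching rules, whereas here the first match wins for simplicity).

```python
def predict_cpu_time(m):
    """Piecewise-linear CPU-time predictor built from the sample rules.

    `m` is a dict of design metrics; first matching rule wins.
    """
    if m["num_nets"] <= 7332:
        return (21.9 + 0.0019 * m["num_cells"] + 0.0005 * m["num_nets"]
                + 0.07 * m["num_pads"] - 0.0002 * m["num_fixed_cells"])
    if m["num_overlap_lyr"] <= 0 and m["num_cells"] <= 71413:
        return (-15.6 + 0.0888 * m["num_nets"] - 0.0559 * m["num_cells"]
                - 0.0015 * m["num_fixed_cells"] - 6 * m["num_overlap_lyr"]
                - 1 * m["num_routing_lyr"])
    return None  # no rule covers this design
```

For a small design (5000 nets, 10000 cells, 100 pads, no fixed cells) the first rule fires and yields 21.9 + 19.0 + 2.5 + 7.0 = 50.4 seconds.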
– different predictors (CPU time, field of use, etc.), monitors, and flow
optimizations become possible
– a very accurate rule-based prediction is obtained when there is at least one
representative run in the training set
– a high-accuracy prediction of a new design can be obtained even when there is no
representative run in the training set
• Ongoing research:
– address the “need for representative” issue
– generate more predictors that can be used in real design flows