July 5, 2022

Excellicon – Managing Critical Timing Constraints Across Design and Verification to Reduce Timing and Signal Integrity Issues

Excellicon, one of the ESD Alliance’s newest members, provides tools for designers to connect early design to its physical representation in order to gain insights into partitioning schemes for optimal floorplans and timing.

Rick Eram, Excellicon’s vice president of sales and operations, and I recently discussed ways for designers to compare the physical floorplan against the actual register transfer level (RTL) code and constraints. We also covered chip design trends and complexity, and a few of his predictions for the design and verification market.

Smith: What trends are you hearing about from chip designers and verification engineers?

Eram: Static analysis techniques have become more prevalent for verification. This is due to the sheer complexity of chips and the fact that traditional dynamic techniques either cannot handle the massive amounts of data or are too costly in terms of run time. Because of these limitations, modern designs are driving the adoption of static techniques for performing functional and power verification. The demand for better abstraction and static analysis techniques will keep growing as these verification challenges compound.

Smith: Chip designs are becoming very complex and at the same time are using more and more IP. How does this impact the development and management of timing constraints? Have automation tools kept up with this demand?

Eram: For many years, designers created timing constraints manually. Fast forward to today, though, and the number of constraints and timing exceptions is so large that a manual approach is no longer humanly possible. Additionally, many IP blocks either do not have associated timing constraints, or the timing constraints are written totally out of context of the design where the IP is used. Today's massive gate counts and the increasing use of IP create a perfect storm for timing failures when non-automated techniques are used to manage timing constraints.

Automation and abstraction are the only way to manage the ever-increasing complexity of timing constraints and the propagation of timing information throughout the chip. This shift is analogous to the mid-1980s to early 1990s, when it was unthinkable to allow an automation tool to generate gate-level designs. Yet today, no one would consider trying to manually lay out a complex chip containing millions of logic gates.

Manipulating timing constraints through all layers of hierarchy, from IP to the top level, requires promotion and/or demotion of constraints. Such operations are essentially unmanageable by hand; automation is the only option to ensure correctness and completeness. Timing Equivalence Checking (TEC) is now essential to make sure all timing constraints remain accurate and as intended as timing criteria are propagated through all layers of hierarchy, and, in the case of design revisions, that the timing constraints are properly matched to the new version of the RTL.
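To make the promotion idea concrete, here is a minimal Python sketch, not Excellicon's implementation: it rewrites one block-level SDC constraint so its port and pin references remain valid at the top level. The instance path, constraint text, and regex are illustrative assumptions.

```python
# Hypothetical sketch of constraint "promotion": rewriting a block-level
# SDC constraint so its port/pin references are valid at the top level.
# Production tools do far more (uniquification, exceptions, demotion);
# this only shows the core path-rewriting idea.
import re

def promote_sdc_line(sdc_line: str, instance_path: str) -> str:
    """Prefix every get_ports/get_pins target with the IP's instance path."""
    def add_prefix(match: re.Match) -> str:
        # A block-level port or pin becomes a hierarchical pin at the top.
        return f"get_pins {{{instance_path}/{match.group(2)}}}"
    return re.sub(r"(get_ports|get_pins)\s+\{?([\w/]+)\}?", add_prefix, sdc_line)

# Block-level constraint as delivered with the IP:
block_sdc = ("create_generated_clock -name div_clk -divide_by 2 "
             "-source [get_ports clk_in] [get_pins u_div/q]")

# Promoted to the top level, where the IP is instantiated as u_core/u_ip0:
print(promote_sdc_line(block_sdc, "u_core/u_ip0"))
```

Even this toy case shows why TEC matters: after any such rewrite, the promoted constraints must be proven equivalent to the originals in their new context.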

Another example is timing budgeting to enable hierarchical static timing analysis (STA). In the old days, dealing with timing budgets meant simply leaving enough margin in the timing windows. Today, the demand for higher frequencies and throughput requires rigorous analysis of timing budgets to make sure timing is met within each time window for each block. This starts with the initial design and must continue as the chip moves through design, integration, and timing closure to ensure the budgets are still met as the design is completed.
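As a toy illustration of the budgeting arithmetic, the Python sketch below splits a clock period across the blocks on one cross-hierarchy path and then checks post-implementation delays against the resulting budgets. Every number, the block names, and the proportional-split policy are assumptions for the example, not a real methodology.

```python
# Toy illustration of timing budgeting for hierarchical STA. All numbers
# and the proportional-split policy are assumptions for the example only.
CLOCK_PERIOD_NS = 1.25          # 800 MHz target
INTERCONNECT_MARGIN_NS = 0.15   # reserved for top-level routing

# Estimated relative "weight" (e.g., logic depth) of each block on one
# cross-hierarchy path from a launching flop to a capturing flop.
path_weights = {"u_fetch": 3, "u_decode": 2, "u_alu": 5}

budget_pool = CLOCK_PERIOD_NS - INTERCONNECT_MARGIN_NS
total_weight = sum(path_weights.values())
budgets = {blk: budget_pool * w / total_weight for blk, w in path_weights.items()}

# Later in the flow, compare each block's actual delay against its budget.
actual_delays = {"u_fetch": 0.31, "u_decode": 0.25, "u_alu": 0.58}
for blk, budget in budgets.items():
    slack = budget - actual_delays[blk]
    status = "meets budget" if slack >= 0 else "OVER budget"
    print(f"{blk}: budget {budget:.3f} ns, actual {actual_delays[blk]:.3f} ns, "
          f"slack {slack:+.3f} ns -> {status}")
```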

Smith: Have these challenges changed significantly from 10 or even five years ago? What accounts for the change?

Eram: Each new design cycle brings increased chip sizes and gate counts. Additionally, the number and complexity of clocks are exploding to meet increased design demands. This is driving the need for more integration and increased use and reuse of IP as a means to bring down design cost and manage complexity. In fact, many existing chip designs become the IP for the next-generation design.

By the same token, shrinking geometries contribute to the need for more analysis and verification to avoid the timing or signal integrity issues that can cause chip failure. The manipulation of timing constraints, combined with this increasing complexity, demands more stringent timing analysis.

Smith: Correct by construction is a term popularized years ago but not used much today. What is your definition of correct by construction? Is it a feasible approach to chip timing verification and design?

Eram: Correct by construction implies a methodology where connectivity and continuity of all data and information throughout the design process is maintained to minimize the chance of human errors and process-related inconsistencies.

In the context of timing constraints, it means using the actual design description to ensure the correct timing constraints are written for downstream tools and flows. Basically, the timing constraints must be directly connected to the underlying hardware description language (HDL) code so that they are true representations of the actual design. This ensures no mismatches or inadvertent misses, and in some cases the correlation between HDL and timing constraints can also highlight design issues as a designer implements the functional design. Correct by construction essentially eliminates the differing interpretations among team members that often lead to confusion and design gaps, and that inevitably result in design errors and system-on-chip (SoC) malfunction.
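One small example of such an HDL-to-constraint correlation check, sketched in Python under assumed file contents and a hypothetical clock-naming convention: flag any clock-like RTL port with no matching create_clock constraint, and any constraint that targets a port the RTL does not declare.

```python
# Minimal sketch of one correct-by-construction check: every clock-like
# port in the RTL should have a matching create_clock constraint. The
# file contents, naming convention, and regexes are assumptions.
import re

rtl = """
module soc_top (input wire sys_clk, input wire usb_clk,
                input wire rst_n, output wire [31:0] dout);
endmodule
"""

sdc = """
create_clock -name sys_clk -period 1.25 [get_ports sys_clk]
"""

# Ports whose names end in "clk" are treated as clocks (a naming convention).
clock_ports = set(re.findall(r"input\s+wire\s+(\w*clk)\b", rtl))
constrained = set(re.findall(r"create_clock[^\[]*\[get_ports\s+(\w+)\]", sdc))

for port in sorted(clock_ports - constrained):
    print(f"WARNING: clock port '{port}' has no create_clock constraint")
for name in sorted(constrained - clock_ports):
    print(f"WARNING: constraint targets '{name}', not a clock port in the RTL")
```

Running this flags usb_clk as unconstrained, the kind of silent gap that, at SoC scale, only an automated correlation between the HDL and its constraints can catch.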

Formal analysis is a key component of correct by construction. As designers analyze and configure the design, they can check and validate proper functionality. Formal is a powerful static technique and, when properly managed and applied, can have a significant impact on both analysis and verification of design performance.

Smith: What are your predictions for the semiconductor industry and design and verification in particular?

Eram: The complexity will continue to increase and there will be a need for more innovative approaches to designing ASICs in all aspects of the design. Verification will be more complex as a result and will require new design and verification techniques to complement each other and to increase predictability.

For example, we see power becoming more of an issue as complexity grows and design requirements demand lower-power designs. The answer will be further optimization of every aspect of the chip, including all power consumers on the chip. Clock power is one of the major sources of on-chip power consumption. The traditional balancing of clock trees can result in wasted power. An automated approach to optimizing the clock tree would save power while at the same time ensuring that target performance is achieved.

For instance, balancing skew groups independently of one another reduces power consumption by ensuring that each branch of the clock tree is optimized for the load it must drive. This avoids the larger, more power-hungry drivers that global clock tree balancing requires.
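A back-of-the-envelope Python sketch of the power argument, using the standard clock-net dynamic power relation P = C·V²·f with made-up capacitance numbers: over-driving every branch to balance the global tree switches more capacitance than sizing each skew group for its own load.

```python
# Back-of-the-envelope illustration of why balancing skew groups
# independently can save clock power. P_dyn = C * V^2 * f per buffer
# stage (activity ~1 on a clock net); all capacitance values are made up.
VDD = 0.75       # volts
FREQ = 1.0e9     # 1 GHz clock

# Switched capacitance (farads) of the buffering each branch needs when
# sized only for its own skew group...
independent_cap = {"cpu": 40e-15, "ddr_phy": 25e-15, "periph": 10e-15}
# ...versus when every branch is over-driven to balance the global tree.
global_cap = {"cpu": 40e-15, "ddr_phy": 38e-15, "periph": 36e-15}

def clock_power(caps):
    return sum(c * VDD**2 * FREQ for c in caps.values())

p_ind, p_glob = clock_power(independent_cap), clock_power(global_cap)
print(f"global balancing : {p_glob * 1e6:.1f} uW")
print(f"per-group        : {p_ind * 1e6:.1f} uW "
      f"({100 * (1 - p_ind / p_glob):.0f}% lower)")
```

With these illustrative numbers, per-group balancing switches 75 fF instead of 114 fF, roughly a third less clock-buffer power at the same frequency.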

We believe earlier design planning will become crucial to addressing many design metrics, such as power and timing. As a result, more automation will continue to be needed to replace manual approaches. Additionally, the need for predictability will force designs to be analyzed at earlier stages to avoid delays and iterations downstream.

As predictability and connectivity of design information become essential, front-end design planning must become more physically aware and factor in implementation impact earlier in the process. As this practice becomes more ingrained, early generation of timing constraints will capture much of the critical timing information in the RTL code, yielding design constraints that can be used for downstream analyses such as static power checking and clock domain crossing analysis. Of course, when new automation technologies are introduced, a system of checks and balances is needed to ensure connectivity and continuity of all design data from RTL to final physical design and vice versa, verifying the physical design back against the RTL that best describes the design intent.

About Rick Eram

Rick Eram, Excellicon's vice president of sales and operations, has an extensive background in building efficient and effective teams that address customer needs on both business and technical fronts. He has nearly 30 years of hands-on EDA industry experience designing tools and has been directly involved in developing and managing engineering teams as well as running sales and marketing campaigns. Eram's work was instrumental in two IPOs during his tenures at Analogy (now Synopsys) and Magma (now Synopsys). At Atrenta (now Synopsys), he developed a marketing strategy that was adopted company-wide.