Topic: Official Uncommonly Dense Discussion Thread
keiths

Posts: 2195
Joined: Jan. 2006

Posted: Oct. 06 2006,11:36

Davey digs himself in deeper:
 
Quote (DaveScot @ Oct. 06 2006,12:38)
Oh Goody! Now 2ndClass wants to be the next clown I knock down. These people have not done any hardware design. They have not drawn schematics for many complex digital designs then sat thousands of hours in the driver's seat of a logic analyzer and oscilloscope debugging their own designs. That and programming is all I did for almost 25 years and I was really, really good at it.

2ndclass sticks his foot in his mouth thusly:

   
Quote
- “Simulations of gate logic” are only done with boolean logic. What other kind of logic do you think is simulated?

- Contrary to your strawman, nobody here said that analog considerations aren’t important. They just aren’t part of gate-level modelling.


But this article in EDN says:

The most common form of logic simulation is event-driven, in which the simulator sees the world as a series of discrete events. When an input value on a primitive gate changes, the simulator evaluates the gate to determine whether this change causes a change at the output and, if so, schedules an event for some future time (Figure 3).

Most event-driven logic simulators allow you to attach minimum, typical, and maximum delays to each model (Figure 4). When you run the simulator, you can select one of these delay modes, and the simulator uses that mode for all of the gates in the circuit. Also, some simulators allow you to select one delay mode as the default and then force certain gates to adopt another mode. For example, you might set all the gates in your datapath to use minimum delays and all the gates in your control path to use maximum delays, thereby allowing you to perform a “cheap and cheerful” timing analysis.

What a dope. There’s much more at the EDN link.


Dave,

It's painfully obvious to everyone (including the folks at UD) that you're bluffing.  Why keep pretending to understand how chip simulation is done?  You're just making yourself look ridiculous.  Perhaps you've done some board design, but you clearly don't understand chip design methodology.

First of all, do you really think that a gate-level simulation becomes non-boolean just because gate delays are added?
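If you doubt it, here's a toy sketch (Python; the gates, nets, and delays are invented for illustration, and no real simulator is remotely this crude) of exactly the kind of event-driven, delay-annotated simulation the EDN passage describes.  Attach min, typical, or max delays as you please: every value the simulator propagates is still just true or false.  The delays decide when an event fires, not what kind of value it carries.

Code Sample
import heapq

# Netlist: gate name -> (function, input nets, output net, delay in ns).
# The gates, nets, and delays here are made up for the example.
GATES = {
    "g1": (lambda a, b: a and b, ("a", "b"), "n1", 2),    # AND, 2 ns
    "g2": (lambda a, b: a or b,  ("n1", "c"), "out", 3),  # OR, 3 ns
}

def simulate(stimulus, end_time=100):
    """stimulus: list of (time, net, boolean value) events."""
    values = {}                    # current boolean value of each net
    events = list(stimulus)
    heapq.heapify(events)
    while events:
        t, net, val = heapq.heappop(events)
        if t > end_time or values.get(net) == val:
            continue               # no change on this net, nothing to evaluate
        values[net] = val
        print(f"{t:4d} ns  {net} = {val}")
        # Re-evaluate every gate driven by this net and schedule its output
        # change one gate delay into the future.
        for fn, ins, out, delay in GATES.values():
            if net in ins and all(i in values for i in ins):
                new_out = fn(*(values[i] for i in ins))   # booleans in, boolean out
                heapq.heappush(events, (t + delay, out, new_out))
    return values

simulate([(0, "a", True), (0, "b", True), (0, "c", False)])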

Secondly, in 20 years of chip design (microprocessors, ASICs, and FPGAs) I have never used, nor seen anyone use, nor heard about anyone using a gate-level simulation for timing analysis.  Can you do it?  Of course.  But why would you?  It's the wrong tool for the job, and there are much better tools available.

What's wrong with using a gate-level simulation for timing analysis?  Here are two biggies:

1)  Your vectors (or testbenches) have to achieve 100% path coverage (not just node coverage) to guarantee that you haven't missed any critical paths.  Not only is this impossible to achieve (or even to approach) for most designs (the path count blows up exponentially with logic depth; see the toy sketch after this list), it also means that your verification suite has to be nearly complete before you can do significant timing analysis.  A stand-alone timing analysis tool has no such limitations and requires no vectors or testbenches.

2) To isolate a timing path using gate-level simulation, you have to a) produce a failure, b) debug from the failure back to the critical path, c) fix the path, and d) resimulate to find the next failure.  Step (b) in particular takes a huge amount of engineering time, all for the sake of highlighting one or a handful of critical paths.
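Here's the toy sketch I promised under point (1).  The circuit structure is invented purely for counting, but it shows why exhaustive path coverage is hopeless: the node count grows linearly with depth while the number of distinct paths grows exponentially.

Code Sample
# Made-up circuit structure, purely for counting.  Each stage is two
# parallel gates feeding a merging gate, so every stage adds three nodes
# but doubles the number of distinct input-to-output paths.
def nodes_and_paths(stages):
    nodes = 1            # the primary input
    paths = 1
    for _ in range(stages):
        nodes += 3       # two parallel gates plus the merge gate
        paths *= 2       # a path can route through either parallel gate
    return nodes, paths

for depth in (10, 20, 40, 64):
    nodes, paths = nodes_and_paths(depth)
    print(f"{depth:3d} stages: {nodes:4d} nodes, {paths:,} paths")
# 64 stages: 193 nodes but 18,446,744,073,709,551,616 paths to cover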

A timing analyzer, by contrast, identifies hundreds or even thousands of critical paths all at once.  The engineer simply has to fix the paths and rerun the analyzer.

This is why the EDA vendors sell timing analysis tools, and it's why everyone buys them instead of trying to piggyback "cheap and cheerful" timing analysis onto their functional simulations.
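And the core of what those tools do isn't exotic: at heart it's a longest-path computation over the netlist graph.  Here's a bare-bones sketch in Python (the netlist, delays, and clock period are invented, and real analyzers obviously handle clocking, setup/hold, false paths, and far more).  Note that it needs no vectors at all and flags every failing endpoint, with its worst path, in a single pass.

Code Sample
# Bare-bones static timing sketch (invented netlist, delays, and clock
# period; needs Python 3.9+ for graphlib).  No stimulus: walk the netlist
# in topological order, compute the worst-case arrival time at every net,
# and report every net that misses the clock period, with its worst path.
from graphlib import TopologicalSorter

# net -> list of (driving net, gate delay in ns); nets with no entry are inputs
NETLIST = {
    "n1":  [("in_a", 2), ("in_b", 2)],
    "n2":  [("n1", 3), ("in_c", 1)],
    "out": [("n2", 4), ("n1", 6)],
}
CLOCK_PERIOD = 8  # ns

arrival = {}   # net -> (worst arrival time, path that produces it)
deps = {net: [d for d, _ in drivers] for net, drivers in NETLIST.items()}
for net in TopologicalSorter(deps).static_order():
    drivers = NETLIST.get(net, [])
    if not drivers:                       # primary input arrives at t = 0
        arrival[net] = (0, [net])
        continue
    # The latest-arriving driver plus its gate delay sets this net's arrival.
    t, path = max((arrival[d][0] + delay, arrival[d][1] + [net])
                  for d, delay in drivers)
    arrival[net] = (t, path)

for net, (t, path) in arrival.items():
    slack = CLOCK_PERIOD - t
    if slack < 0:
        print(f"VIOLATION  {' -> '.join(path)}  arrival {t} ns, slack {slack} ns")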

We've pretty much run that topic into the ground, unless you aren't embarrassed enough yet.  Now let's hear why your definition of irreducible complexity is the right one, as opposed to Behe's and Dembski's.

--------------
And the set of natural numbers is also the set that starts at 0 and goes to the largest number. -- Joe G

Please stop putting words into my mouth that don't belong there and thoughts into my mind that don't belong there. -- KF

  