Author: ABBdriveX

AQ: Simulator history

Power electronics has always provided a special challenge for simulation. As Hamish mentioned above, one of the problems encountered is inductor cutsets and capacitor loops, which lead to numerical instability in the simulation matrices.

In the 80s, Spice ran so slowly that it was not an option unless you wanted to wait hours or days for results, and frequently it failed to converge anyway. It was never intended to handle the large swings of power circuits, and coupled with the numerical problems above, it was just not a feasible approach.

Ideal-switch simulations were used with other software to get rid of many of the nonlinearities of devices that slowed simulation down, but Spice really hated ideal switches as it would try to converge on the infinite slope edges.

Three universities started writing specialized software for converter simulation to address these shortcomings of Spice. Virginia Tech had COSMIR, which I helped write with a grad student, Duke University had the program which later became Simplis, and the University of Lowell had their program, the name of which I don’t recall (anyone remember?).

All of these programs started before Windows came along, and they were fast and efficient. With Windows, the programming overhead to maintain programs like these moved beyond the scope of what university research groups in power electronics could handle. Only the Duke program survived, with Ron Wong leading the effort at a private company. The achievements of Simplis are remarkable, but it is a massive effort to keep this program going for a relatively small marketplace (power supply companies are notoriously cheap, so the potential market does not get realized), and that keeps the price quite high. If you can afford it, you should have this program.

Spice now runs at a reasonable pace on the latest PCs, so it is back in the game. LT Spice is leading the charge because it is free, and the models are relatively rugged. Now that speed is less of a factor, you can put real switches in, and Spice can handle them in a reasonable amount of time. (Depending on your definition of “reasonable”.)

PSIM was another ideal-switch simulator, and it eliminated the convergence headaches that plagued all the other programs by not having a convergence loop at all. You just cut the step size down to get the accuracy you needed. This worked fine for exploring power stages and waveforms, but was not good for fast transient feedback loops. As the digital controller people quickly realized, the resolution on the PWM output needed to avoid numerical oscillations is very fine, and PSIM couldn’t handle that without slowing down too much.
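
For illustration, here is a minimal fixed-step, convergence-free simulation in that same spirit: an ideal switch plus forward-Euler integration, where accuracy comes purely from shrinking the step size. The topology (a synchronous buck) and all component values are my own illustrative choices, not anything from PSIM itself.

```python
# Fixed-step simulation of a buck converter with an ideal switch:
# no Newton iteration, no convergence loop. Accuracy is controlled
# only by the step size dt. All values are illustrative.

def simulate_buck(vin=12.0, duty=0.5, fsw=100e3, L=100e-6, C=100e-6,
                  R=5.0, dt=20e-9, t_end=5e-3):
    """Forward-Euler integration with an ideal (zero-transition-time) switch."""
    iL, vC, t = 0.0, 0.0, 0.0
    T = 1.0 / fsw
    while t < t_end:
        # Ideal switch: the switch node sits at vin during the on-time, 0 otherwise
        v_sw = vin if (t % T) < duty * T else 0.0
        diL = (v_sw - vC) / L          # inductor: L * di/dt = v_L
        dvC = (iL - vC / R) / C        # capacitor: C * dv/dt = i_C
        iL += diL * dt
        vC += dvC * dt
        t += dt
    return vC

print(simulate_buck())   # settles near duty * vin = 6 V
```

Halving `dt` halves the local truncation error; there is nothing to "converge", which is exactly why this style of simulator never hangs, and also why a fast feedback loop forces the step size (and the run time) way down.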

When I left Virginia Tech, I felt the bulk of the industry needed a fast simulation and design solution so engineers did not have to add to their burdens with worrying about convergence and other problems. This is a hardware-driven field, and we all have our hands full dealing with real life blowups that simulation just doesn’t begin to predict.

I have observed in teaching over the years that engineers in a hurry to get to the hardware have very little tolerance for waiting for simulation. If you are building a well-known topology, about 2 seconds is as long as they will wait before they become impatient.

This is the gap that POWER 4-5-6 plugs. The simulation is practically instantaneous, and the program has no convergence issues so you design and simulate rapidly before moving to a breadboard. It is intended for the working engineer who is under severe time pressure, but would like some simulation to verify design integrity.

AQ: Popularization of SPICE

I am currently writing a bullet point history of the popularization of SPICE in the engineering community. The emphasis is on the path SPICE has taken to arrive on the most engineering desktops. Because of this emphasis, my history begins with the original Berkeley SPICE variants, continues on to PSpice (its limited, but free student version made SPICE ubiquitous) and culminates with LTspice (because, at over three million downloads, it has reached many more users than all other SPICE variants combined).

I have contacted Dr. Laurence Nagel (the father of Berkeley SPICE) and Mike Engelhardt (LTspice) in order to verify the accuracy of the historical account (haven’t had a chance to fold in Dr. Nagel’s corrections yet), but I am lacking solid information about the beginnings of PSpice (I don’t even know who the technical founders of MicroSim were). Ian Wilson was an early technical V.P. Also, I am not sure what the PSpice acronym means. (Seems to me that it started out as uPspice?)

Here is what I have recently found about PSpice (more info appreciated):

User’s Guide to PSpice, Version 4.05, January 1991
From Chapter 1: INTRODUCTION, Section 1.1 Overview, starting with paragraph 2 (page 3):

“PSpice is a member of the SPICE family of circuit simulators. The programs in this family come from the SPICE2 circuit simulation program developed at the University of California at Berkeley during the early 1970’s. The algorithms of SPICE2 were considerably more powerful and faster than their predecessors. The generality and speed of SPICE2 led to its becoming the de facto standard for analog circuit simulation. PSpice uses the same numeric algorithms as SPICE2 and also conforms to the SPICE2 format for input and output files. For more information on SPICE2, see the references listed in section 13.2.1.4 (page 427), especially the thesis by Laurence Nagel.

“PSpice, the first SPICE-based simulator available on the IBM-PC, started being delivered in January of 1984.

“Convergence and performance is what sets PSpice apart from all the other ‘alphabet’ SPICEs. Many SPICE programs became available on the IBM-PC around mid-1985, after Microsoft released their FORTRAN compiler version 3.0. For the most part, these SPICEs are little modified from the U.C. Berkeley code. Using benchmark circuits, we find that PSpice runs anywhere from 1.3 to 30 times faster than our imitators. In the area of convergence, PSpice has a two-year lead in improving convergence and a customer base that is larger than all of the other SPICE vendors combined (including those SPICEs offered for workstations and mainframes). This larger customer base provides more feedback, sooner, than any other SPICE program is likely to receive.”

From Chapter 1: INTRODUCTION, Section 1.4 Standard Features, last paragraph (page 7):

“PSpice, version 3.00 (Dec. 1986) and later, is a complete re-write of the simulator into the ‘C’ programming language. It is not a version of SPICE3, from U.C. Berkeley, which is also written in ‘C’. MicroSim has overhauled the data structures and code, however the analog simulation algorithms are similar and the numeric results are consistent with SPICE2 and SPICE3. Having the simulator re-written in ‘C’ allows faster development, allowing our team to reliably modify and extend the simulator in several directions at once.”

From the January 1987 Newsletter: PSpice went from version 2.06 (Fortran) to version 3.00 (C). Speed increased by 20%. PSpice 3.01 (Dec 86) introduced the non-linear Jiles and Atherton core model.

From the April 1987 Newsletter: PSpice 3.03 (Apr 87) introduced ideal switches.

From the July 1991 Newsletter: PSpice announced Schematics at the June 1991 Design Automation Conference. (Became available when PSpice 5.0 shipped in July 91?)


AQ: Cross regulation for multiple outputs

Cross regulation is a very important aspect of multiple-output supplies. It can be addressed in several ways: transformer coupling, mutually coupled output filter chokes (forward-mode), and/or shared output voltage/current sensing. All of which are practically impossible to model. I have tried them all.

I have more or less written the first two off, since they are under the control of external vendors, who make their own decisions as to their most cost-effective solutions. At best, transformer solutions yield ±5 percent regulation, and can be many times worse. Coupled inductors yield much better cross regulation, but the turns ratio is critically important: if you are off by one turn, you lose a percentage of efficiency.

Shared current/voltage cross sensing is much more common sense. First, choose the respective weighting of the sense currents from each output approximately in proportion to their respective output powers. Keep in mind that, without cross sensing, the unsensed outputs can be as much as ±12% out of regulation. Decide the sense current through your lower sense resistor, then multiply your percentages by this sense current for your positive outputs, and calculate each output’s resistance to provide that respective current. Try it, you will be amazed. The negative outputs will also improve immensely.

One can visualize it this way: if one senses only one output, only the load of that output influences the feedback loop, which, for example, increases the pulse width with each increase in load on the heavily loaded output. The lighter, unsensed loads go crazy. By cross sensing, the lighter loads are brought more under control, and the regulation of the primary load is loosened somewhat.
By sharing the current through the lower sense resistor, you can improve the regulation of every output voltage in a multiple-output power supply.
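
As a worked sketch of that procedure (the output voltages, powers, and total sense current below are invented for illustration; only the power-proportional weighting follows the text):

```python
# Size the sense resistors for shared (cross) sensing: each positive
# output contributes a share of the total sense current into the lower
# sense resistor, weighted by that output's share of the total output
# power. The small voltage at the sense node is ignored for simplicity
# (a real design subtracts the reference voltage).

def sense_resistors(outputs, i_sense_total=1e-3):
    """outputs: list of (v_out, p_out) tuples for the sensed outputs."""
    p_total = sum(p for _, p in outputs)
    plan = []
    for v_out, p_out in outputs:
        i_share = i_sense_total * p_out / p_total   # weighted sense current
        r = v_out / i_share                         # resistor from output to sense node
        plan.append((v_out, i_share, r))
    return plan

for v, i, r in sense_resistors([(5.0, 25.0), (12.0, 12.0), (3.3, 3.3)]):
    print(f"{v:5.1f} V output: {i * 1e6:6.1f} uA through {r / 1e3:8.2f} kOhm")
```

The heavily loaded 5 V output gets the largest weighting, so its load changes dominate the feedback, while the lighter outputs still have some say in the loop.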

AQ: Automation solution

Automation is a solution:
1. To reduce manpower and the cost-to-company (CTC) that goes with it.

2. A few skilled technicians can run the automated machines smoothly, with a much lower number of errors and faults (since a human is not directly controlling everything and is not burdened with multitasking for extended durations, which causes fatigue and hence errors and faults).

3. Power consumption and time to market can be estimated and reduced, as machines run on schedule and can work for longer durations than human beings; they don’t ask for tea/coffee/lunch breaks, nor do they ask for incentives. (Care for the people who maintain them, and for the machines that earn profits for you, through regular maintenance and proper inspection of their condition.) Nowadays very good automatic power-management processors/controllers are available which can manage power according to the defined conditions, the load, and real-time necessity.

4. Train the operators and technicians regularly to keep them up to date with the tricks, methods, operations, and principles needed to handle most situations on their own (this reduces the need for a manager kept around only to yell at, threaten, and discriminate against subordinates, who knows only one slogan: “do it properly, otherwise I will…”). A training department that actually has the capability to technically train employees, from laborers to talented engineers, is a necessity in this age, as the work is no longer just lifting boulders and digging holes. We live in an advanced age with many expectations, much competition, and external pressures.

5. The finance people cry over NRE costs and salaries and treat these investments as if they were put into a share/equity/debt fund, but earnings from a business, and the financial-management capability, must be in line with the level and operations performed by the company. Instead of keeping underqualified people in technical roles, hire engineers who have reached an expert level in the automation industry and know the in-depth issues that occur in between and underneath, so they can estimate and expect correct values and timelines. Qualified project managers are much more realistic in their approaches, thoughts, assumptions, and mentality.

AQ: Creepage in thermal substations

The term creepage distance is specifically associated with porcelain insulators used in air-insulated substations. The insulator surface attracts dust, pollution (in industrial areas), and salt (along the sea coast), and these form a conducting layer on the surface of the insulator body when the surface is wet. As long as the surface is dry, there is not much of a problem. But when it becomes wet, in the early morning or during the winter season, the outer surface forms a conducting path along the surface from the high-voltage terminal to the earthed metal fitting at the end of the metallic structure, which may lead to surface conduction and finally external flash-over. The insulators are provided with sheds to limit direct exposure to mist or dew. The protected area of the sheds does not allow formation of a continuous conducting layer along the surface of the insulator, as the part of the surface under the sheds may not become wet from mist or dew; this part (the length along the bottom surface) of the insulator surface is called protected creepage.

Measurement of corona inception and extinction voltages gives a fair idea of possible flashover even with protected creepage, but these values change under different levels of pollution.
This problem is not present with composite insulators, as the silicone rubber shed surface is hydrophobic and does not allow formation of a continuous wet conducting layer. Hence higher creepage is not considered necessary for composite insulators.

However, air density is also a limiting factor when deciding the creepage of insulators, necessitating higher creepage at higher altitudes.
You may have to assess the level of pollution and the altitude of the substation and select the creepage accordingly.
Medium pollution levels may call for 25 mm/kV.
Very high pollution areas, such as the sea coast and chemical or pharmaceutical industrial zones, may call for 31 mm/kV, where the insulators can become expensive; alternatively, periodic hot-line washing is another solution for cleaning pollution off the insulators.
In case of very high pollution levels, GIS may be a safe solution (if cost is not an issue).
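
A quick sketch of the selection arithmetic, using the specific-creepage figures quoted above. The "light" figure and the multiplication by the highest system voltage are my own assumptions from common practice, not from the text:

```python
# Required creepage = specific creepage (mm/kV, by pollution level)
# multiplied by the highest system (phase-to-phase) voltage.

SPECIFIC_CREEPAGE_MM_PER_KV = {
    "light": 16.0,    # assumption, not from the text above
    "medium": 25.0,   # "medium pollution" figure from the text
    "heavy": 31.0,    # "very high pollution" figure from the text
}

def required_creepage_mm(u_m_kv, pollution="medium"):
    """u_m_kv: highest system voltage in kV (phase-to-phase)."""
    return SPECIFIC_CREEPAGE_MM_PER_KV[pollution] * u_m_kv

# e.g. a 145 kV class substation insulator at medium pollution:
print(required_creepage_mm(145, "medium"))   # 3625 mm
```

The altitude correction mentioned above would be applied on top of this figure.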

Thermal power substations where there are no electrostatic precipitators may also experience equipment failures due to pollution. Pressurized equipment like SF6 gas circuit breakers experienced external flash-overs during the winter months in Northern India. The utility initially did not accept the theory of insulation failure due to pollution, but they had to accept pollution as the cause when a similar failure occurred the following year during the same winter months. They have resorted to hot-line washing since then, and there have been no more such failures. Sometimes these deposits may not be glaringly visible, but failure may still happen.

AQ: High AC current inductors

There are several issues at work here. For high AC current inductors, you want to have low core losses, low proximity loss in the windings, and low fringing effects.

At normal frequencies, ferrites have by far the lowest core loss, much better than MPP and other so-called “low-loss” materials. So you would like to use them from this aspect.

A toroid gives the greatest winding surface for the magnetic material, letting you use the least number of layers and hence minimizing proximity loss. The toroid also has the advantage of putting all the windings on the outside of the structure, facilitating cooling. This is very important.

However, you can’t easily gap a ferrite toroid; it’s very expensive.

Some aerospace applications actually cut the ferrite toroid into segments and reassemble them with several gaps to solve the problem. The multiple gaps keep fringing effects low. It might be nice if you could buy a set of toroidal segments so you don’t have to do the cutting because that is a big part of the cost. I don’t know if that is a reasonable thing to do, maybe someone can comment.

Once you go to MPP, the core loss goes up, but the distributed gap minimizes the fringing losses.

The MPP lets you run somewhat higher on current before saturation, but if you have high ac you can’t take full advantage of that due to the core losses.

All these tradeoffs (and quite a few more not mentioned for brevity) are the reason that so many different solutions exist.

AQ: EMI & EMC

EMI/EMC is a subjective topic rather than a theoretical one, but we shall look at it starting with noise prevention and then noise suppression.

Prevention, or designing in the solution, means concentrating on the noise-making parts/components and the role they play in the circuit. This refers to the parts and circuits directly involved in switching: the PFC MOSFET and its driver, the PFC diode, the DC/DC switching MOSFET and its driver, and its output diode. Do not leave out the magnetics and the layout design; a bad design will cause ugly switching and then give you a headache with EMC problems.

Part/component and topology selection is also important, since it predetermines to some degree how much EMI/EMS you will need to take care of. As Stephen explained, a phase-shifted topology is better than a non-phase-shifted one.

A MOSFET will have higher noise at high frequency, but this can be compensated, tolerated, or tricked to some extent by the driving speed, by using a snubber, and perhaps by shielding. The output diode should be carefully selected so that its high-frequency noise is within your output noise spec; otherwise it is an issue. Make sure this noise cannot be transmitted out as radiated noise, and that it does not couple into your primary circuit, or it will go all the way out to the AC input and be transmitted as radiated noise. Trr (reverse recovery time) is the parameter to look at; the lower, the better. In any case, some snubber (RC, ferrite bead, etc.) should be determined and added.

Noise suppression refers to filters, energy-dumping circuits, and so on, but some of it is a basic need. One such item is the input filter, which isolates the noise generated inside the PSU so it does not pass into the input supply system and interfere with other systems or the supply environment (EMI), and which also keeps environmental noise from entering your power supply and interfering with it (EMS). An input filter is definitely a must for your switching frequency and its sub-harmonics, which fall into the frequency range of the EMI standards.

There are many techniques to suppress the noise, depending on the location and the nature of the circuit, switch, and diode. If I am not mistaken, what you mean by RCD refers to the RCD snubber added across the main transformer, the DC/DC switch, and typically the output diode; you are on the right track using this kind of snubber around those components.

Shielding may be needed for your main transformer if it has a gap in it (it may not be needed if your controller and your switch are so-called good parts), but decide only after you have studied some samples.

Good layout always gives peace of mind. Noisy parts have to be kept some distance away from the noise-sensitive controller or decision-making circuits, and decision-making connections have to be terminated wisely, at points that avoid sensing high-noise-content signals. If there is no choice, an RC filter is unavoidable; in any case an RC filter is commonly placed in front of decision-making circuits even when the signal is known to be clean.

AQ: Simulation on EMI

As a mathematical tool, simulation can help us quickly approach the results we need. If everything is done the right way, simulation can give us reliable conducted EMI results in the low-frequency range.

Differential-mode conducted EMI can be simulated with good accuracy in the low-frequency range. The accuracy of common-mode conducted EMI depends on the accuracy of a few parasitic parameters that need to be measured.

Personally for research, I would like to use simulation as a validation tool for calculation, and test results of prototypes can be used as proof for simulation.

E.g. for EMI filter:

1. Do the calculation for the differential-mode conducted EMI filter;
2. Do the calculation for the common-mode conducted EMI filter based on the parasitic parameters at hand, or on estimates;
3. Use the simulation to check and validate if the calculation is right or if something is wrong and needs to be corrected;
4. Use prototype test results to check and validate if the simulation results are right.

Some other issues caused by the EMI filter can be found during system-level simulation before prototyping, e.g. audio susceptibility and EMI filter damping problems.
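
As a sketch of step 1, here is the usual sizing arithmetic for a single-stage differential-mode LC filter: one LC section attenuates at 40 dB/decade above its corner, so the corner frequency follows from the attenuation needed at the first harmonic inside the EMI band. The frequencies and component values are illustrative, not from the text.

```python
import math

def dm_filter_corner(f_noise_hz, att_needed_db):
    """Corner frequency for a single LC stage (40 dB/decade roll-off)."""
    return f_noise_hz * 10 ** (-att_needed_db / 40.0)

def c_from_corner(f_c_hz, L_h):
    """Pick C for a chosen choke L so that 1/(2*pi*sqrt(L*C)) = f_c."""
    return 1.0 / ((2 * math.pi * f_c_hz) ** 2 * L_h)

# Example: 40 dB of attenuation needed at 150 kHz (start of the
# conducted EMI band) puts the corner a decade lower, at 15 kHz.
f_c = dm_filter_corner(150e3, 40.0)
C = c_from_corner(f_c, 100e-6)           # with a 100 uH DM choke
print(f_c, C)                            # 15 kHz, ~1.1 uF
```

Step 3 in the list is then exactly the check that a simulation of this filter in circuit reproduces the attenuation the hand calculation predicts.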

AQ: Conditional stability

Conditional stability, I like to think about it this way:

The ultimate test of stability is knowing whether the poles of the closed loop system are in the LHP. If so, it is stable.

We get at the poles of the system by looking at the characteristic equation, 1+T(s). Unfortunately, we don’t have the math available (except in classroom exercises); we have an empirical system that may or may not be reducible to a mathematical model. For power supplies, even if they can be reduced to a model, it is approximate and just about always has significant deviations from the hardware. That is why measurements persist in this industry.

Nyquist came up with a criterion for making sure that the poles are in the LHP by drawing his diagram. When you plot the vector diagram of T(s), it must not encircle the -1 point.

Bode realized that the Nyquist diagram was not good for high gain, since it plots the magnitude on a linear scale, so he came up with the Bode plot, which is what everyone uses. The Bode criterion only says that the phase must be above -180 degrees when the gain crosses over 0 dB. Nothing says the phase can’t dip below -180 degrees before the 0 dB crossing.

If you draw the Nyquist diagram of a conditionally stable system, you’ll see it doesn’t surround the -1 point.
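
Here is a small numeric illustration of a conditionally stable loop, using a triple integrator with a double zero (my own toy transfer function, not anything from the text): the phase sits well below -180 degrees while the gain is still high, yet recovers before the 0 dB crossing, so the Bode criterion is satisfied.

```python
import cmath, math

WZ = 100.0   # double-zero corner (rad/s)
W0 = 464.0   # integrator gain frequency (rad/s); crossover ~ W0**3 / WZ**2

def T(w):
    """Toy loop gain T(jw) = (1 + s/WZ)**2 * (W0/s)**3 at s = jw."""
    s = 1j * w
    return (1 + s / WZ) ** 2 * (W0 / s) ** 3

def gain_db(w):
    return 20 * math.log10(abs(T(w)))

def phase_deg(w):
    # cmath.phase returns the principal value in (-180, 180]; this
    # loop's true phase lies in (-270, -90), so map positive principal
    # values down by 360 degrees to unwrap it.
    p = math.degrees(cmath.phase(T(w)))
    return p - 360 if p > 0 else p

print(gain_db(30.0), phase_deg(30.0))   # high gain, phase below -180 deg
print(gain_db(1e4), phase_deg(1e4))     # ~0 dB crossover, phase ~ -91 deg
```

Sweeping T(jw) over frequency and plotting it in the complex plane would show the curve passing to the right of -1 without encircling it, which is the Nyquist picture of the same fact.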

If you like, I can put some figures together. Or maybe a video would be a good topic.

All this is great of course, but it’s still puzzling to think of how a sine wave can chase itself around the loop, get amplified and inverted, phase shifted another 180 degrees, and not be unstable!

Having said all this about Nyquist, it is not something I plot in the lab. I just use it as an educational tool. In the lab, in courses, or consulting for clients, the Bode plot of gain and phase is what we use.

AQ: Paralleling IGBT modules

I’m not sure why the IGBTs would share the current since they’re paralleled, unless external circuitry (series inductance, resistance, gate resistors) forces them to do so?

I would be pretty leery of paralleling these modules. As far as the PN diodes go, reverse recovery currents in PN diodes (especially if they are hard switched to a reverse voltage) are usually not limited by their internal semiconductor operation until they reach “soft recovery” (the point where the reverse current decays). They are usually limited by external circuitry (resistance, inductance, IGBT gate resistance). A perfect example: the traditional diode reverse recovery measurement test externally limits the reversing current to a linear falling ramp by using a series inductance. If you could reverse the voltage across the diode in a nanosecond, you would see an enormous reverse current spike.

Even though diode dopings are pretty well controlled these days, carrier lifetimes are not necessarily. Since one diode might “turn off” (go into a soft decreasing reverse-current ramp, where the diode actually DOES limit its own current) before the other, you may end up with all the current going through one diode for at least a little while (the motor will look like an inductor, for all intents and purposes, during the diode turn-off). It is probably better to control the max diode current externally for each driver.

Paralleling IGBT modules where the IGBT but not the diode has a PTC is commonly done at higher powers. I personally have never done more than 3 x 600A modules in parallel but if you look at things like high power wind then things get very “interesting”. It is all a matter of analysis, good thermal coupling, symmetrical layout and current de-rating. Once you get too many modules in parallel then the de-rating gets out of hand without some kind of passive or active element to ensure current sharing. Then you know it is time to switch to a higher current module or a higher voltage lower current for the same power. The relative proportion of switching losses vs conduction losses also has a big part to play.