Category: Blog

AQ: Caution is the key to success in power converters

I work across the full scale of power electronics in voltage and current: from 1 W switchers for powering ICs, to 3 kW telecom power supplies, up to multi-megawatt power converters for reactive power control in AC transmission networks, and on into power converters for high-voltage transmission.

There is a difference in how you can work on converters at these different scales. The difference comes down to how much the prototype you are destroying costs, how long it takes to rebuild it, and how easily it will kill you. When you have spent more than 2 million on the prototype parts, you do not ever blow it up. If the high voltage on your converter is 15 kV or more, there is no way to probe it directly with an oscilloscope and no possibility of being anywhere near that voltage without being hurt. So the level of care at these bigger power levels is higher, and the consequence of a mistake is so severe that the process needs to be much more detailed and controlled, mostly for safety's sake.

We find that our big-converter processes really help when working on smaller converters. The processes include sign-offs for safety, designed and prescribed safety and earthing systems for each converter, no scope probes put on or taken off live parts, and working in pairs at all times with agreed, planned actions. Pair working is one thing that may save you in the event of an electric shock. These processes seem very slow and cumbersome to engineers who work on low voltage (<1000 V), but they are very useful even at low voltage.

Having said all that, experienced, cautious engineers prevent converter blow-ups. Add just a little bit of process and the success rate can go up significantly. I think that an analysis of Dr. Ridley's failure list will point to actions that will improve success.

As my boss at one of those really large converter companies used to say, “Stamp out converter fires”.

AQ: Switching frequency selection

Switching frequency selection is really a tradeoff, and follows the guidelines below:

  1. Lower frequency (e.g. 30 kHz) means bulkier magnetics and capacitors; higher frequency (e.g. 1 MHz) means smaller parts, hence a more compact PSU.
  2. Stay away from exactly 150 kHz, as this is the lower edge of the conducted-EMI compliance band; if your switching frequency sits right at 150 kHz, your PSU will show up as a strong emitter. Many commercial low-cost PSUs have used 100 kHz for years, which is why many inductors and capacitors are specified at 100 kHz.
  3. Higher-frequency (>= 1 MHz) converters allow better transient response. Obviously, the control IC must be capable of supporting that frequency. There are plenty of resonant converters available.
  4. Higher frequency results in higher switching losses; to control that, you will need faster-switching FETs, diodes, capacitors, magnetics and control ICs.
  5. Higher frequency MAY result in more broadband noise, but this is not always true, since noise can be controlled by good PCB layout and good magnetics design.

Board-level DC/DC converters are commonly built with 1 MHz switchers.
Chassis-level telecom/server PSUs tend to stay in the 100-300 kHz range. A rough scaling sketch for guideline 1 follows.
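
As a rough illustration of guideline 1, here is a back-of-the-envelope sketch (all component values and ripple targets are assumptions, not recommendations) of how the required buck inductance and output capacitance shrink as the switching frequency rises:

```python
# Rough back-of-the-envelope sketch (all numbers are illustrative assumptions,
# not a real design): required buck inductance and output capacitance scale
# roughly inversely with switching frequency, which is the size/frequency
# tradeoff in guideline 1.

def buck_l_c(fsw_hz, vin=12.0, vout=5.0, iout=2.0,
             ripple_current_ratio=0.3, ripple_voltage=0.05):
    """Estimate L and C for a buck converter at a given switching frequency."""
    duty = vout / vin
    delta_i = ripple_current_ratio * iout          # allowed inductor ripple current
    L = (vin - vout) * duty / (fsw_hz * delta_i)   # from V = L di/dt over the on-time
    C = delta_i / (8.0 * fsw_hz * ripple_voltage)  # output ripple from capacitor charge
    return L, C

for fsw in (30e3, 100e3, 300e3, 1e6):
    L, C = buck_l_c(fsw)
    print(f"fsw = {fsw/1e3:7.0f} kHz -> L ~ {L*1e6:6.1f} uH, C ~ {C*1e6:6.1f} uF")
```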

Manufacturers are able to achieve exceptional density by virtue of high-frequency resonant topologies, but they have to achieve high efficiencies too; otherwise they will generate so much heat that they cannot meet UL/IEC safety requirements.
In some cases, they leave the thermal problem to the user. Usually, the first few paragraphs of any reference design discuss the tradeoffs.

AQ: How to get confidence while powering ON an SMPS prototype?

I never just apply power to a first prototype and see what happens. Smoke and loud noises are the most likely result, and then all you know is that something was not perfect. So how should you test the next prototype sample?

A good idea is to power your control circuit from an external supply first – often something like 12 V. Check the oscillator waveform, frequency, gate pulses, etc. If possible, use another external power supply to apply a voltage to your output. As you increase this voltage slowly, you should see the gate pulses go from maximum to minimum duty cycle when passing the desired output voltage. If this does not happen, check your feedback path, still without turning the main power on.

If everything looks as expected, remove the external supply from the output but keep the control circuit powered from an external source. Then SLOWLY turn up the main input voltage while using your oscilloscope to monitor the voltage waveforms in the power circuit and a DC voltmeter to monitor the output voltage. Keep an eye on the ammeter on the main power source. If something suspicious occurs, stop increasing the input and investigate what is happening while the circuit is still alive.

With a light load you should normally expect the output voltage to reach the desired value quickly, at least in a flyback converter. Check that this happens. Then check what happens with a variable load – preferably an electronic load.

If you did not calculate your feedback loop, you will very likely see self-oscillation (normally not destructive). If you do not, use the step-load function in your electronic load to check stability. If you see clear ringing after a load step, you still have some work to do on your loop. Feedback and stability is another huge area which Mr. Ridley has taught us a lot about; a rough illustration of the step-load check follows.
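
As a minimal sketch of that step-load check, the snippet below treats the closed loop as a simple second-order system (an assumption; the bandwidth and damping values are made up) and flags a response that rings too much after a step:

```python
# Minimal sketch (assumed second-order closed-loop model, illustrative numbers
# only): apply a step and flag excessive ringing, mimicking the step-load
# stability check done with an electronic load.
import math

def step_response_overshoot(zeta, wn=2*math.pi*5e3, t_end=2e-3, dt=1e-7):
    """Simulate a unit step into a 2nd-order system and return percent overshoot."""
    x, v, peak, t = 0.0, 0.0, 0.0, 0.0
    while t < t_end:
        a = wn*wn*(1.0 - x) - 2.0*zeta*wn*v   # x'' = wn^2*(u - x) - 2*zeta*wn*x'
        v += a*dt
        x += v*dt
        peak = max(peak, x)
        t += dt
    return (peak - 1.0) * 100.0

for zeta in (0.2, 0.4, 0.7):
    ovs = step_response_overshoot(zeta)
    verdict = "clear ringing, keep working on the loop" if ovs > 25 else "acceptable"
    print(f"damping {zeta:.1f}: overshoot ~ {ovs:4.1f}%  -> {verdict}")
```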

And yes – the world needs powerful POWER ENGINEERS desperately!

AQ: Hysteretic controller

We can see that the hysteretic controller is a special case of other control techniques. For example, "sliding mode control" usually uses two state variables to determine one switching variable (switch ON or OFF), so the hysteretic controller is a special case of "1-dimensional" sliding mode. In general, there are many techniques under the name of "geometric control" that can be used to prove the stability of a general N-state system under a given switching rule. So I believe that you can apply some of these techniques to prove the stability of the hysteretic controller, although I have not tried to do this myself. The book "Elements of Power Electronics" by Krein discusses this in Chapter 17.

But I can say more about one technique that I have used and that, in my opinion, is the most general and elegant technique for non-linear systems. It is based on Lyapunov stability theory. You can use this technique to determine a switching rule for a general circuit with an arbitrary number of switches and state variables. It can be applied to the simple case of the hysteretic controller (i.e. one state variable, one switching variable) to verify whether the system is stable and what the conditions for stability are. I have done this and verified that it is possible to prove the stability of hysteretic controllers, imposing only very weak constraints (and, of course, with no linearization needed). In a nutshell, to prove the system stable, you have to find a Lyapunov function for it.
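
As a concrete, if simplified, illustration of the one-state, one-switch case, here is a minimal numerical sketch of a hysteretic current controller on an assumed R-L load. It is not a Lyapunov proof, but it shows the bounded-invariant-band behavior such a proof formalizes: once the current enters the hysteresis band, it never leaves it.

```python
# Minimal numerical sketch (assumed, illustrative values): a hysteretic
# (bang-bang) current controller driving an R-L load, i.e. one state variable
# and one switching variable, matching the "1-dimensional sliding mode" view.
# Once the current enters the hysteresis band it cannot leave it, which is the
# bounded-invariant-set idea behind a Lyapunov-style argument (an illustration,
# not a proof).

VIN, R, L = 24.0, 1.0, 1e-3       # supply, load resistance, inductance (assumed)
IREF, H = 10.0, 0.5               # current reference and half hysteresis band
DT, T_END = 1e-7, 20e-3

def simulate():
    i, sw = 0.0, True             # inductor current, switch state
    i_min, i_max, switches = float("inf"), float("-inf"), 0
    t = 0.0
    while t < T_END:
        if sw and i > IREF + H:   # upper threshold: turn the switch OFF
            sw, switches = False, switches + 1
        elif not sw and i < IREF - H:
            sw, switches = True, switches + 1
        v = VIN if sw else 0.0
        i += (v - R * i) / L * DT # di/dt = (v - R*i)/L
        if t > 5e-3:              # record the band after the start-up transient
            i_min, i_max = min(i_min, i), max(i_max, i)
        t += DT
    return i_min, i_max, switches

lo, hi, n = simulate()
print(f"current confined to {lo:.2f} A .. {hi:.2f} A around {IREF} A, {n} switch events")
```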

Where this can be expanded is in going beyond a simple window comparator for hysteretic control:

#1) Control bands, or switching limits, can be variable and can also be part of a loop, especially if one wants to guarantee a nearly fixed frequency.

#2) Using a latch or double latch after the comparator(s), one can define (remember) the state and add operations such as fixed Ton or Toff periods for additional time control. This permits the "voltage boost" scenario you previously said could not be done. It also prevents the "chaotic" operation and noise susceptibility that others experience with simpler circuits.

#3) Additional logic can lock multiphase topologies to a system clock and compete very well with typical POL buck regulators for high-end processors that require high di/dt response.

Time- or state-domain control systems such as this can have great advantages over typical approaches. Short of complete predictive processing, there is really no control method that provides a faster load response, and even predictive processing can be layered on top of hysteretic control. A minimal sketch of the fixed on-time variant from point #2 is below.
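
As a hedged sketch of point #2, here is the same assumed R-L load with a latch that fires a fixed on-time pulse whenever the current falls below the lower threshold (a valley-detect, constant on-time variant). The values are illustrative only:

```python
# Hedged sketch (illustrative values, same assumed R-L load as the earlier
# hysteretic example): a latch after the comparator fires a FIXED on-time
# pulse whenever the current falls below the lower threshold, i.e. a
# valley-current constant-on-time variant of the window comparator. The
# fixed Ton sets the ripple and makes the switching period far more regular.

VIN, R, L = 24.0, 1.0, 1e-3
IREF_LOW, TON = 9.5, 50e-6        # lower threshold and latched on-time (assumed)
DT, T_END = 1e-7, 20e-3

def simulate():
    i, on_timer = 0.0, 0.0
    periods, last_fire, t = [], None, 0.0
    while t < T_END:
        if on_timer <= 0.0 and i < IREF_LOW:
            on_timer = TON                     # comparator trips -> latch a fixed Ton
            if last_fire is not None:
                periods.append(t - last_fire)  # measure the switching period
            last_fire = t
        v = VIN if on_timer > 0.0 else 0.0
        on_timer -= DT
        i += (v - R * i) / L * DT
        t += DT
    return sum(periods) / len(periods)

print(f"average switching period ~ {simulate()*1e6:.1f} us with Ton = {TON*1e6:.0f} us")
```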

AQ: Experience: Power Supply

My first big one: I had just joined a large corporation's central R&D in Mumbai (my first job) and I was dying to prove to them that they had been really very wise (to hire me). I spent the first few weeks setting up my first AC-DC power supply. Then one afternoon I powered it up. After a few minutes, as I stared intently at it, there was a thunderous explosion, and I was almost knocked over backwards in my chair. When I came to my senses I discovered that the can of the large high-voltage bulk cap had exploded (in those days 1000uF/400V caps were really big). The bare metal can had taken off like a projectile and hit me with a thump on the chest through my shirt (the spot was still very red hours later). A shower of cellulose and some drippy stuff was all over my hair and face. Plus a small crowd of gawking engineers when I came to. Plus a terribly bruised ego, in case you didn't notice.

Now this is not just a picturesque story. There is a reason why aluminum caps now have safety vents (on the underside too), and why they ask you never, ever to apply reverse polarity, even accidentally, especially to a high-voltage aluminum cap. Keep in mind that an aluminum electrolytic is certainly damaged by reverse voltage or overvoltage, but the failure mechanism in both cases is simply excessive heat generation. Philips components, in older datasheets, used to actually specify that their aluminum electrolytics could tolerate a 40% overvoltage for maybe a second, I think, with no long-term damage. And people often wonder why I only use 63V aluminum electrolytics as the bulk cap in PoE applications (for the PD). They suggest 100V and warn me about surges and so on. But I still think 63V is OK here, besides being cheap, and I tend to shun overdesign. In fact I think even ceramic caps can typically handle at least 40% overvoltage by design and test, and almost forever with no long-term effects. Maybe I'm wrong here though. Double-check that please.

Another historic explosion I heard about after I had left an old power supply company. I deny any credit for this one, though. My old tech, I heard, was trying in my absence to document the stresses in the 800W power supply which I had built and left behind. The front end was a PFC stage with four or five paralleled PFC FETs. I had carefully put ballasting resistors in the source and gate of each FET separately, and laid out diligently symmetrical PCB traces from the lower node of each sense resistor to ground (two-sided PCB, no ground plane). This was done to ensure no parasitic resonances and good dynamic current sharing too. There was a method to my madness, it turns out. All the tech did, when asked to document the current in the PFC FETs, was place a small loop of wire in series with the source of one of these paralleled FETs. That started a spectacular fireworks display which I heard lasted over 30 seconds (what, no fuse???), with each part of the power supply going up in flames almost sequentially, domino fashion, and a small crowd staring in silence along with the completely startled but unscathed tech (lucky guy). After that he certainly never forgot the key lesson: never attempt to measure FET current by putting a current probe in its source; put it on the drain side. It was that simple. The same unit never exploded after that, just to complete the story.

AQ: Automation engineering

Automation generally involves taking a manufacturing, processing, or mining process that was previously done with human labor and creating equipment or machinery that does it without human labor. Often, in automation, engineers will use a PLC or DCS with standard I/O, valves, VFDs, RTDs, etc. to accomplish this task. Control engineering falls under the same umbrella in that you are automating a process, such as controlling the focus of a camera or maintaining the speed of a car with the gas pedal. But often you are designing something like the autofocus on a camera or the cruise control on an automobile, and you frequently have to implement the controls using FPGAs, or circuits and components fabricated entirely to the engineering team's own design.

When I first started, I started on the DCS side. Many of the large continuous-process industries only let chemical engineers like myself anywhere near the DCS. The EEs landed the instruments and were done. The attitude was that you had to be a process engineer before you became a controls engineer. In the PLC world it was the opposite: the EEs dominated. Now it doesn't line up along such sharp lines anymore. But there are lots of people doing control/automation work who are clueless when it comes to understanding the process. When this happens, it is crucial that they are given firm oversight by someone who does understand it.

On operators: I always tell young, budding engineers to learn to talk to operators, with one piece of advice: do not discount their observations just because their analysis of the cause is unbelievable; their observations are generally spot on. Someone designing a control system must be able to think like an operator, understand how operators behave, and anticipate how they will use the control system. This is key to a successful project. If the operators do not like or understand the control system, they will kill the project. This is different from understanding how the process works, which is also important.

AQ: Transformer uprating

I once uprated a set of three 500 kVA 11/0.433 kV ONAN transformers to 800 kVA simply by fitting bigger radiators. This was with the manufacturer's blessing. (They were not hermetically sealed, and there were significant logistical difficulties in changing the transformers, so this was an easy option.) The limiting factor was not the cooling but the magnetic saturation of the core at the higher rating. All the comments about uprating the associated equipment are relevant, particularly on the LV side; the increase in HV amps is minimal. Pragmatically, if you can keep the top-oil temperature down you will survive for at least a few years. Best practice, of course, is to change the transformer!
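
A quick current check with the figures above, using I = S / (sqrt(3) x V_LL), shows why the LV side deserves most of the attention: the absolute rise in HV amps is small, while the 433 V side gains several hundred amps.

```python
# Quick check of full-load line currents before and after the uprating,
# using I = S / (sqrt(3) * V_LL). The absolute increase on the 11 kV side is
# small, while the 433 V side gains several hundred amps, which is why the
# LV equipment needs the closest look.
import math

def line_current(kva, v_ll):
    return kva * 1e3 / (math.sqrt(3) * v_ll)

for v_ll, side in ((11_000, "HV 11 kV"), (433, "LV 433 V")):
    i_old = line_current(500, v_ll)
    i_new = line_current(800, v_ll)
    print(f"{side}: {i_old:7.1f} A -> {i_new:7.1f} A (+{i_new - i_old:.1f} A)")
```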

It is true that you can overload your transformer to, say, 125%, 150% or even more for a certain length of time, but every instance of that overloading degrades the life of your transformer's winding insulation. The oil temperature indicated on the transformer's temperature gauge is much lower than the hotspot temperature of the winding, which is the critical issue when considering the life of the winding insulation. Transformers rated around 300 kVA most probably do not even have a temperature-indicating gauge. The main concern is how effectively you can keep the hotspot temperature down so that it does not significantly take away the useful life of your transformer's winding insulation.
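
To put rough numbers on why the hotspot matters so much, here is a small sketch based on the IEC 60076-7 loading guide, where the relative ageing rate of non-thermally-upgraded paper roughly doubles for every 6 K of hotspot temperature above the 98 °C reference. Treat the figures as illustrative, not as a loading study.

```python
# Hedged illustration of why hotspot temperature matters: per the IEC 60076-7
# loading guide, the relative ageing rate of (non-thermally-upgraded) paper
# insulation roughly doubles for every 6 K of hotspot above the 98 degC
# reference. Numbers below are illustrative only.

def relative_ageing_rate(hotspot_c, ref_c=98.0, doubling_k=6.0):
    return 2.0 ** ((hotspot_c - ref_c) / doubling_k)

for hotspot in (98, 104, 110, 116, 122):
    v = relative_ageing_rate(hotspot)
    print(f"hotspot {hotspot:3d} degC -> insulation ages {v:4.1f}x the normal rate")
```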

AQ: Induction machines testing

Case: We tested three different machines under no-load conditions.
The 50 HP and 3 HP machines are the ones that behave abnormally when we apply 10% overvoltage. The third machine (7.5 HP) reacts normally under the same condition.
What we mean by abnormal behavior is that the input power of the machine increases dramatically under only 10% overvoltage, which is not the case with most induction machines. This can be seen in the numbers given below.

50 HP, 575V
Under 10% overvoltage:
Friction & Windage Losses increase 0.2%
Core loss increases 102%
Stator Copper Loss increases 107%

3 HP, 208V
Under 10% overvoltage:
Friction & Windage Losses increase 8%
Core loss increases 34%
Stator Copper Loss increases 63%

7.5 HP, 460V
Under 10% overvoltage:
Friction & Windage Losses decrease 1%
Core loss increases 22%
Stator Copper Loss increases 31%

So far, we have not been able to diagnose the exact reason that pushes those two machines to behave in such a way.
Answer: A few other things I have not seen (yet) include the following:
1) Are the measurements of voltage and current being made by “true RMS” devices or not?
2) Actual measurements for both current and voltage should be taken simultaneously (with a “true RMS” device) for all phases.
3) Measurements of voltage and current should be taken at the motor terminals, not at the drive output.
4) Measurement of output waveform frequency (for each phase), and actual rotational speed of the motor shaft.

These should all be done at each point on the curve.

The reason for looking at the phase relationships of voltage and current is to ensure the incoming power is balanced. Even a small voltage imbalance (say, 3 percent) may result in a significant current imbalance (often 10 percent or more). This unbalanced supply will lead to increased (or at least unexpected) losses, even at relatively light loads. Also – the unbalance is more obvious at lightly loaded conditions.
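
A quick helper for that balance check, using the NEMA MG-1 style definition (percent unbalance = 100 × maximum deviation from the average ÷ average); the example readings below are made up for illustration:

```python
# Quick helper for the balance check (NEMA MG-1 style definition): percent
# unbalance = 100 * (max deviation from the average) / average. The example
# phase readings below are hypothetical.

def percent_unbalance(readings):
    avg = sum(readings) / len(readings)
    return 100.0 * max(abs(r - avg) for r in readings) / avg

volts = [575.0, 578.0, 566.0]     # hypothetical line-to-line voltmeter readings
amps  = [52.0, 49.0, 57.5]        # hypothetical line current readings
print(f"voltage unbalance: {percent_unbalance(volts):.1f} %")
print(f"current unbalance: {percent_unbalance(amps):.1f} %")
```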

As noted above, friction and windage losses are speed dependent: the approximate relationship is with the square of speed.

Things to note about how the machine should perform under normal circumstances (a quick numerical check against your readings follows this list):
1. The flux densities in the magnetic circuit are going to increase proportionally with the voltage. This means +10% volts means +10% flux. However, the magnetizing current requirement varies more like the square of the voltage (+10% V >> +18-20% magnetizing amps).
2. Stator core loss is proportional to the square of the voltage (+10% V >> +20-25% kW).
3. Stator copper loss is proportional to the square of the current (+10% V >> +40-50% kW).
4. Rotor copper loss is independent of voltage change (+10% V >> +0 kW).
5. Assuming speed remains constant, friction and windage are unaffected (+10% V >> +0 kW). Note that with a change of 10% volts, it is highly likely that the speed WILL actually change!
6. Stator eddy loss is proportional to square of voltage (+10% V >> +20-25% kW). Note that stator eddy loss is often included as part of the “stray” calculation under IEEE 112. The other portions of the “stray” value are relatively independent of voltage.
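
As a quick numerical check of the reported figures against rules 2 and 3 above (the assumed +19% magnetizing-current rise comes from rule 1, and the thresholds here are only a rough screen):

```python
# Quick sanity check of the no-load numbers against the scaling rules above
# (a rough sketch): at +10% V, core loss should rise roughly 21% (V^2) and,
# at no load, stator copper loss roughly 40-45% (magnetizing current up
# ~18-20%, loss ~ I^2).

V_RISE = 1.10
core_expected = (V_RISE**2 - 1.0) * 100.0     # ~21 %
cu_expected = (1.19**2 - 1.0) * 100.0         # ~42 %, assuming +19% magnetizing amps

measured = {                                  # % increases reported above
    "50 HP": {"core": 102, "stator_cu": 107},
    "3 HP":  {"core": 34,  "stator_cu": 63},
    "7.5 HP": {"core": 22, "stator_cu": 31},
}

print(f"expected: core ~{core_expected:.0f} %, stator copper ~{cu_expected:.0f} %")
for name, m in measured.items():
    flag = "ABNORMAL" if m["core"] > 1.5 * core_expected else "plausible"
    print(f"{name:6s}: core +{m['core']}%, stator Cu +{m['stator_cu']}%  -> {flag}")
```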

Looking at your test results, it would appear that the 50 HP machine:
a) is very highly saturated
b) has damaged/shorted laminations
c) has a different grade of electrical steel (compared to the other ratings)
d) has damaged stator windings (possibly from operation on the drive, particularly if it has a very high dv/dt and/or a high common-mode voltage characteristic)
e) or has some combination of the above.

One last question – are all the machines rated for the same operating speed (measured in RPM)?

AQ: Industrial automation process

My statement “the time it takes to start or stop a process is immaterial” is somewhat out of context. The complete thought is “the time it takes to start or stop a process is immaterial to the categorization of that process into either the continuous type or the discrete type”, which is how this whole discussion got started.

I have the entirely opposite view of automation. “A fundamental practice when designing a process is to identify bottlenecks in order to avoid unplanned shutdowns”.

Don’t forget that the analysis should include the automatic control system. This word of advice is pertinent to whichever “camp” you chose to join.

Just as you have recognized the strong analogies and similarities between “controlling health care systems” and “controlling industrial systems”, there are strong analogies between so-called dissimilar industries as well, and between the camp which calls itself “discrete” and the camp which waves the “continuous” flag.

Your concern about the time it takes to evaluate changes in parameter settings for your cement kiln is really a topic of economic risk, which could include discussions of how to mitigate those risks, such as methods of modeling the process virtually for testing and evaluation rather than playing with the real-world process. This is applicable to both “camps”.

The challenge of starting up or shutting down your cement kiln is the same challenge as starting up or shutting down a silicon crystal reactor or a wafer processing line in the semiconductor industry. The time scales may be different, but the economic risks may be the same, if not greater, for the electronics industry.

I am continuously amazed at how I can borrow methods from one industry and apply them to another. For example, I had a project controlling a conveyor belt at a coal mine which was 2.5 miles long – several million pounds of belting, not to mention the coal itself! The techniques I developed for tracking the inventory of coal on that belt laid the basis for the techniques I used to track the leading and trailing edges of bread dough on a conveyor belt 4 feet long. We used four huge 5 kV motors and VFDs at the coal mine compared to a single 0.75 HP, 480 VAC VFD at the bakery, and the startups and shutdowns were orders of magnitude different, but the time frame was immaterial to what the controls had to do and to the techniques I applied to do the job.
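
A hedged sketch of that belt-tracking idea (every name, speed, and length here is hypothetical): integrate belt speed into a position register for each tracked edge, and the identical logic serves a 2.5 mile coal belt or a 4 foot dough belt.

```python
# Hedged sketch of the belt-tracking idea described above: integrate belt
# speed each scan into a position for every tracked edge (coal inventory
# boundaries or the leading/trailing edge of dough). The logic is identical
# whether the belt is 2.5 miles or 4 feet long; only the numbers change.
# Everything here (names, speeds, lengths) is hypothetical.

class BeltTracker:
    def __init__(self, belt_length_m):
        self.belt_length_m = belt_length_m
        self.edges = []                       # positions (m) of tracked edges

    def load_edge(self):
        self.edges.append(0.0)                # a new edge enters at the tail pulley

    def scan(self, belt_speed_m_s, dt_s):
        # Called once per controller scan: advance every edge by speed * dt.
        self.edges = [p + belt_speed_m_s * dt_s for p in self.edges]
        delivered = [p for p in self.edges if p >= self.belt_length_m]
        self.edges = [p for p in self.edges if p < self.belt_length_m]
        return len(delivered)                 # edges that reached the head pulley

mine_belt = BeltTracker(belt_length_m=4023.0)   # ~2.5 miles, same class either way
bakery_belt = BeltTracker(belt_length_m=1.2)    # ~4 feet
bakery_belt.load_edge()
for _ in range(600):                            # 600 scans of 10 ms at 0.25 m/s
    if bakery_belt.scan(belt_speed_m_s=0.25, dt_s=0.01):
        print("dough leading edge reached the head pulley")
        break
```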

I once believed that I needed to be in a particular industry in order to feel satisfied in my career. What I found out is that I have a passion for automation which transcends the particular industry I am in at the moment and this has led to a greater appreciation of the various industrial cultures which exist and greater enjoyment practicing my craft.

So these debates about discrete vs. continuous don't affect me in the least. My concern is that the debates may keep other, more impressionable engineers from realizing a more fulfilling career by causing them to embrace one artificial camp over the other. Therefore, my only goal in engaging in this debate is to challenge any effort at erecting artificial walls which unnecessarily drive a damaging wedge between us.

AQ: Simulation interpretation in automation industry

Related to the automation industry, there are generally three different interpretations of what simulation is:
1) Mechanical Simulations – Via various solid modeling tools and CAD programs, tooling, moving mechanisms, end-effectors, etc. are designed with 3D visualization, connecting the modules to prevent interference, checking mass before actual machining, and so on.
2) Electronics Simulations – This type of simulation is either related to the manufacturers of specific instrumentation used in the automation industry (ultrasonic welders, laser marking systems, etc.) or to the designers of circuit boards.
3) Electrical & Controls Simulations.
A) Electrical schematics, from the main AC disconnect switch down to the 24 VDC, low-current I/O interface.
Simulation tools allow easy determination of the system's required amperage, fuse sizes, wire gauges, and accordance with standards (CE, UL, cUL, TUV, etc.).
B) Logic Simulations, HMI interface, I/O exchange, motion controls…
a) If you want any kind of meaningful simulation, get into the habit of "modular ladder logic" design. This means, don't design your ladder like one continuous, huge program that runs the whole thing; simulating that type of program is almost impossible in every case. Break the logic down into sub-systems, or maybe even down to stand-alone mechanisms (pick & place, motor starter, etc.); simulating and troubleshooting that scenario is fairly easy.
b) When possible, besides the automated run mode of the machine or system, build "manual mode logic" for it as well. Then, via physical push-buttons or the HMI, you should have "step forward" and "step back" for every physical movement or action.

Simulating the integrity of the ladder logic program and all the components and interfaces will be a breeze if things are done meticulously upfront. A minimal sketch of exercising one stand-alone block in isolation follows.
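
As a hedged illustration of point (a), here is a tiny sketch of one stand-alone block, a motor-starter start/stop seal-in, modeled and exercised on its own before integration. Ladder logic itself is graphical, so Python stands in here purely for illustration; the names and the test sequence are hypothetical.

```python
# Hedged illustration of point (a): model each stand-alone mechanism as a
# small, independently testable block. Ladder logic itself is graphical, so
# this sketch uses Python to mimic one rung: a classic motor-starter
# start/stop seal-in. Names and the test sequence are hypothetical.

def motor_starter(start_pb, stop_pb, overload_ok, sealed_in):
    """One 'rung': output = (start OR seal-in) AND NOT stop AND overload healthy."""
    return (start_pb or sealed_in) and not stop_pb and overload_ok

def test_motor_starter():
    m = False
    m = motor_starter(start_pb=True,  stop_pb=False, overload_ok=True,  sealed_in=m)
    assert m                                    # starts on the start push-button
    m = motor_starter(start_pb=False, stop_pb=False, overload_ok=True,  sealed_in=m)
    assert m                                    # seals in after the button is released
    m = motor_starter(start_pb=False, stop_pb=True,  overload_ok=True,  sealed_in=m)
    assert not m                                # stop push-button drops it out
    m = motor_starter(start_pb=True,  stop_pb=False, overload_ok=False, sealed_in=m)
    assert not m                                # overload trip blocks a restart
    print("motor starter block behaves as expected")

test_motor_starter()
```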