Category: Blog

AQ: Experience: Design

I tell customers that at least 50% of the design effort is the layout and routing, done by someone who knows what they are doing. Layer stackup is critical for multilayer designs. Yes, a solid design is required, but the perfect design goes down in flames with a bad layout. Rudy Severns said it best in one of his early books: you have to “think RF” when doing a layout. I have followed this philosophy for years with great success. Problems with a layout person who wants to run the autorouter or doesn’t understand analog layout? No problem. You, as the design engineer, do not have to release the design until it is to your satisfaction.

I have had Schottky diodes fail because circuit inductance caused just enough of a very high frequency ring (very hard to see on a scope) to exceed the PIV. Know your circuit impedances, and keep your traces short and fat.
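
Here is a minimal sketch of the kind of back-of-the-envelope check I mean, in Python, with made-up parasitic values (substitute your own loop inductance, diode capacitance and damping resistance):

```python
import math

# Hypothetical parasitics -- substitute the values estimated or measured for
# your own layout; the point is only to show how little inductance it takes.
L_loop = 50e-9      # loop (trace + package) inductance, H
C_diode = 200e-12   # diode junction capacitance at the operating point, F
V_step = 24.0       # voltage step applied across the loop, V
R_damp = 0.5        # effective series resistance damping the ring, ohms

# Ring frequency of the parasitic LC tank
f_ring = 1.0 / (2.0 * math.pi * math.sqrt(L_loop * C_diode))

# Characteristic impedance, Q, and first-peak overshoot of the underdamped step response
Z0 = math.sqrt(L_loop / C_diode)
Q = Z0 / R_damp
zeta = 1.0 / (2.0 * Q)
overshoot = V_step * math.exp(-math.pi * zeta / math.sqrt(1.0 - zeta**2))

print(f"ring frequency         ~ {f_ring / 1e6:.0f} MHz")
print(f"estimated peak voltage ~ {V_step + overshoot:.1f} V on a {V_step:.0f} V step")
```

With those numbers the 24 V step rings at roughly 50 MHz and nearly doubles at the first peak, which is exactly the kind of event that is easy to miss on a scope and hard on a Schottky's PIV rating.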

I have fixed a number of problems associated with capacitor RMS current ratings on AC-to-DC front ends. Along with this comes the peak inrush current for a bridge rectifier at turn-on and, in some cases, during steady state. The unit can be turned on at the 90-degree phase angle of the line into a discharged capacitive load. This must be analyzed with assumptions for the input resistance, and/or an inrush current limiting circuit must be added.
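
A quick worst-case estimate is easy to sketch. The values below are placeholders; the point is that a discharged bulk capacitor looks like a short if the unit is switched on at the peak of the line:

```python
import math

# Hypothetical front-end values -- replace with your own line and component data.
V_line_rms = 230.0   # mains RMS voltage, V
R_series = 0.5       # total series resistance: source + bridge + ESR + traces, ohms
R_ntc_cold = 5.0     # cold resistance of an inrush-limiting NTC, ohms (if fitted)

V_peak = math.sqrt(2.0) * V_line_rms   # worst case: switch-on at the 90 deg point

# The discharged bulk capacitor behaves like a short at turn-on, so the peak
# inrush is limited only by the series resistance in the path.
I_inrush_no_limit = V_peak / R_series
I_inrush_with_ntc = V_peak / (R_series + R_ntc_cold)

print(f"peak line voltage     : {V_peak:.0f} V")
print(f"inrush, no limiter    : {I_inrush_no_limit:.0f} A")
print(f"inrush, with cold NTC : {I_inrush_with_ntc:.0f} A")
```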

A satellite power supply had 70 degrees of phase margin on the bench with a resistive load, but oscillated on the real load. We measured the loop with the AP200 on the real load and the phase margin was zero. Test the power supply on the real load before going to production, and then on a random sampling during the life of the product.

I used MathCAD for designs until the average models came out for SMPS. Yes, the equations are nice to see and work with, but they are just models nonetheless. I would rather have PSpice do the math while I pay attention to the models used and the overall design effort. Creating large closed-form equations is fraught with pitfalls, trapdoors, and landmines. Plus, hundreds of pages of MathCAD, which I have done, are hard to sell to the customer during a design review (most attendees drift off after page 1). The PSpice schematics are more easily sold and then modified as needed, with better understanding all around.

AQ: Paralleling IGBT modules

I’m not sure why the IGBTs would share the current just because they’re paralleled, unless external circuitry (series inductance, resistance, gate resistors) forces them to do so?

I would be pretty leery of paralleling these modules. As far as the PN diodes go, their reverse recovery currents (especially if they are hard switched to a reverse voltage) are usually not limited by their internal semiconductor operation until they reach “soft recovery” (the point where the reverse current decays). They are usually limited by external circuitry (resistance, inductance, IGBT gate resistance). A perfect example: the traditional diode reverse recovery measurement test externally limits the reversing current to a linear falling ramp by using a series inductance. If you could reverse the voltage across the diode in a nanosecond, you would see an enormous reverse current spike.
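
To put a rough number on it, here is a small sketch using a simplified triangular-recovery approximation and invented values, showing how the external loop inductance sets the di/dt and therefore the peak reverse-recovery current:

```python
import math

# Hypothetical numbers -- the point is that the external loop, not the diode,
# sets the commutation di/dt and therefore the peak reverse-recovery current.
V_bus  = 600.0    # commutating voltage across the loop, V
L_loop = 100e-9   # stray series inductance in the commutation loop, H
Q_rr   = 5e-6     # diode reverse-recovery charge at this di/dt and temperature, C

di_dt = V_bus / L_loop                     # di/dt forced by the external circuit
I_rr_peak = math.sqrt(2.0 * Q_rr * di_dt)  # triangular-recovery approximation

print(f"di/dt                ~ {di_dt / 1e9:.1f} A/ns")
print(f"peak reverse current ~ {I_rr_peak:.0f} A")
```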

Even though diode dopings are pretty well controlled these days, carrier lifetimes are not necessarily. Since one diode might “turn off” (go into a soft, decreasing reverse current ramp, where the diode actually DOES limit its own current) before the other, you may end up with all the current going through one diode for at least a little while (the motor will look like an inductor, for all intents and purposes, during the diode turn-off). It is probably better to control the maximum diode current externally for each driver.

Paralleling IGBT modules where the IGBT, but not the diode, has a PTC is commonly done at higher powers. I personally have never done more than 3 x 600 A modules in parallel, but if you look at things like high-power wind, then things get very “interesting”. It is all a matter of analysis, good thermal coupling, symmetrical layout and current de-rating. Once you get too many modules in parallel, the de-rating gets out of hand without some kind of passive or active element to ensure current sharing. Then you know it is time to switch to a higher-current module, or to a higher-voltage, lower-current module for the same power. The relative proportion of switching losses versus conduction losses also has a big part to play.
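
As a rough illustration (my own simplification, not a vendor sizing procedure), here is how the de-rating grows with the assumed worst-case sharing error:

```python
# Assume the worst module carries a fraction `mismatch` more than its equal
# share, and size the per-module rating accordingly. All numbers are placeholders.

def required_module_rating(i_total, n_modules, mismatch=0.15, margin=0.10):
    """Per-module current rating needed for n parallel modules.

    mismatch : worst-case static/dynamic sharing error (0.15 = 15 %)
    margin   : additional design margin on top of the sharing error
    """
    i_share = i_total / n_modules          # ideal equal share
    i_worst = i_share * (1.0 + mismatch)   # hottest / fastest module
    return i_worst * (1.0 + margin)

i_total = 3000.0  # A, total load current (hypothetical)
for n in (2, 3, 4, 6):
    print(f"{n} modules in parallel -> each rated for ~{required_module_rating(i_total, n):.0f} A")
```

The more modules you add, the more total installed silicon the mismatch and margin cost you, which is exactly the point at which a bigger module or a higher-voltage, lower-current module starts to look attractive.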

AQ: Is it worth building batteries into electric cars?

Energy storage is the issue. Can we make batteries, supercaps, or some other energy storage technology that will allow an electric car to have a range of 300-500 miles? Motors and drives are already very efficient, so there is not much to be gained by improving their efficiency. As far as converting the entire fleet of cars to electric, I don’t expect to see this happen any time soon. The USA has more oil than all the rest of the world put together; we probably have enough to last 1000 years. Gasoline and diesel engines work very well for automobiles, trucks and locomotives. The USA also has a huge supply of coal, which is a lot cheaper than oil. Electricity is cheaper than gasoline for two reasons: coal is much cheaper than oil, and coal-fired power plants have an efficiency of about 50%, whereas gasoline engines in cars have a thermal efficiency of about 17%. Diesel locomotives have an efficiency of 50%+.

I don’t believe the interchangeable battery pack idea is workable. Who is going to own the battery packs and build the charging stations? And what happens if you get to a charging station with a nearly dead battery and there is no charged battery available?

Who is going to build the charging stations? The most logical answer is the refueling station owners, as an added service. The more important question is about ownership of the batteries. Suppose that, as a standard, all batteries have the same size, shape, connectors, and amp-hour (or kWh) rating, and a finite lifetime of, let’s say, 1,000 recharges. Each standard battery could have an embedded recharge counter. The electric car owner would pay the service charge plus the cost of the kWh of energy plus 1/1000 of the battery cost. That way, you pay for the cost of a new battery once, when you buy or convert to an electric car, and after that you pay the depreciation cost. This means you always effectively own a new battery. The most probable owners of the batteries would be the battery suppliers, or a group or union of them (like a health insurance union). The charging stations collecting the depreciation cost would pass it on to the battery suppliers’ union. Every time a charging station gets a dead battery, or one whose recharge counter is full, it returns it to the union and gets a new one in exchange. So, as the owner of an electric car, you don’t need to worry about how old or new the replacement battery you get from the charging station is; you will always get a fully charged battery in exchange. The charging stations get their energy cost plus their service charges, and the battery suppliers get the price of their new battery supplies.
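
Here is a toy worked example of the per-swap cost under this scheme; every number is a made-up placeholder, purely to show how the 1/1000 depreciation charge works out:

```python
# Toy per-swap cost model for the battery-swap scheme described above.
battery_cost     = 8000.0   # price of a new standard pack, currency units
rated_cycles     = 1000     # assumed recharge life of the pack
pack_energy_kwh  = 50.0     # usable energy per charge
energy_price_kwh = 0.15     # price of electricity per kWh
service_charge   = 2.0      # charging-station handling fee per swap

depreciation_per_swap = battery_cost / rated_cycles      # 1/1000 of the pack cost
energy_cost_per_swap  = pack_energy_kwh * energy_price_kwh

total_per_swap = depreciation_per_swap + energy_cost_per_swap + service_charge
print(f"cost per swap: {total_per_swap:.2f} "
      f"(depreciation {depreciation_per_swap:.2f}, energy {energy_cost_per_swap:.2f}, "
      f"service {service_charge:.2f})")
```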

Buddies, these are just some wild ideas and I am sure someone will come up with a better and more workable idea. And we will see most of the cars on our roads without any carbon emission.

AQ: Building power supply prototypes is the best way to learn

I have been designing power supplies for over 15 years now. We do mostly off-line custom designs ranging from 50 to 500 W, often used in demanding environments such as offshore and shipping.
I think we are the lucky ones who got the chance to learn power supply design using simple topologies like the flyback or the forward converter. If we wanted to make something fancy, we used a push-pull or a half bridge.

Nowadays, straight out of school you get to work on a resonant converter with variable frequency control. Frequencies are driven up above 250 kHz to make it fit in a matchbox, still delivering 100 W or more. PCB layouts become almost impossible to make if you also have to think about cost and manufacturability.
Now digital controllers are coming into fashion. The software designers often know very little about power electronics and think they can solve every problem with a few lines of code.

But I still think the best way to learn is to start at the basics and do some thorough testing on the prototypes you make. In my department we have a standard test program to check whether the prototype functions according to the specifications (Design Verification Tests), but also whether all parts are used within their specifications (Engineering Verification Tests). These tests are done at the limits of the input voltage range and output power. And be aware that the limit of the output power is not just maximum load, but also overload, short circuit and zero load! Start-up and stability are tested at low temperature and high temperature.
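
As an illustration of the corner coverage, here is a minimal sketch that enumerates that kind of test matrix; the limit names are placeholders for whatever the product specification actually defines:

```python
from itertools import product

# Corner test matrix sketch: input voltage limits x load conditions x temperature,
# each combined with the checks described above (DVT function, EVT component stress).
v_in   = ["Vin_min", "Vin_nom", "Vin_max"]
loads  = ["zero load", "max load", "overload", "short circuit"]
temps  = ["T_min", "T_ambient", "T_max"]
checks = ["start-up", "stability", "component stress vs. rating (EVT)"]

matrix = list(product(v_in, loads, temps, checks))
print(f"{len(matrix)} test points, e.g.:")
for row in matrix[:5]:
    print("  " + " | ".join(row))
```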

With today’s controllers the datasheets seem to get ever more limited in information, and the support you get from the FAEs is often very disappointing. Some time ago I even had one in the lab who sat next to me for half a day to solve a mysterious blow-up of a high-side driver. At the end of the day he thanked me, saying he had learned a lot!
Not the result I was hoping for.

AQ: Conditional stability

Conditional stability, I like to think about it this way:

The ultimate test of stability is knowing whether the poles of the closed loop system are in the LHP. If so, it is stable.

We get at the poles of the system by looking at the characteristic equation, 1+T(s). Unfortunately, we don’t have the math available (except in classroom exercises); we have an empirical system that may or may not be reducible to a mathematical model. For power supplies, even if they can be reduced to a model, it is approximate and just about always has significant deviations from the hardware. That is why measurements persist in this industry.

Nyquist came up with a criterion for making sure that the poles are in the LHP by drawing his diagram. When you plot the vector diagram of T(s), it must not encircle the -1 point.

Bode realized that the Nyquist diagram was not good for high gain since it plotted the magnitude on a linear scale, so he came up with the Bode plot, which is what everyone uses. The Bode criterion only says that the phase must be above -180 degrees when the gain crosses over 0 dB. There is nothing that says the phase can’t dip below -180 degrees before the 0 dB crossover.

If you draw the Nyquist diagram of a conditionally stable system, you’ll see it doesn’t surround the -1 point.
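
For anyone who wants to convince themselves numerically, here is a small sketch of a made-up conditionally stable loop gain: an integrator, a double pole that drags the phase below -180 degrees while the gain is still high, and a double zero that brings the phase back before crossover. The corner frequencies and gain are invented for illustration only:

```python
import numpy as np

# Made-up conditionally stable loop gain: integrator + double pole at 10 Hz
# (phase heads toward -270 deg while the gain is still high) + double zero at
# 1 kHz (phase recovers before the 0 dB crossover near 10 kHz).
f  = np.logspace(0, 6, 2000)              # 1 Hz .. 1 MHz
s  = 2j * np.pi * f
wp = 2 * np.pi * 10.0
wz = 2 * np.pi * 1000.0
K  = 2 * np.pi * 1e8                      # places crossover near 10 kHz

T = K * (1 + s / wz) ** 2 / (s * (1 + s / wp) ** 2)

gain_db = 20 * np.log10(np.abs(T))
phase   = np.degrees(np.unwrap(np.angle(T)))

ic = np.argmin(np.abs(gain_db))           # index closest to the 0 dB crossover
print(f"crossover ~ {f[ic] / 1e3:.0f} kHz, phase margin ~ {180 + phase[ic]:.0f} deg")

below = (phase < -180.0) & (gain_db > 0.0)
print(f"phase is below -180 deg from ~{f[below][0]:.0f} Hz to ~{f[below][-1]:.0f} Hz "
      "while the gain is still above 0 dB -- conditionally stable")
```

The same response plotted as a Nyquist locus passes to the side of the -1 point without encircling it, which is exactly the part the Bode plot alone can make confusing.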

If you like, I can put some figures together. Or maybe a video would be a good topic.

All this is great of course, but it’s still puzzling to think of how a sine wave can chase itself around the loop, get amplified and inverted, phase shifted another 180 degrees, and not be unstable!

Having said all this about Nyquist, it is not something I plot in the lab. I just use it as an educational tool. In the lab, in courses, or consulting for clients, the Bode plot of gain and phase is what we use.

AQ: Friendly systems that don’t need technicians to diagnose problems

How do we make our systems so friendly that they do not need technicians to help diagnose problems? Most of the more obvious answers have been well documented here, but to emphasize the point: it is normally the case that diagnostics and alarms can dominate the amount of code constructed for an application. That is, the code required to fully diagnose an application may be as much as, if not more than, the code required to make the application work in the first place!

I have seen HMIs with diagnostic screens showing an animated version of the cause-and-effects code that allows users to see where the trip condition is. I have also seen screens depicting pre-start checks, Operator guides, etc., all animated to help the user. Allen-Bradley even has a program viewer that can be embedded on an HMI screen, but you would probably need a technician to understand the code.

From a control system problem perspective, it is inevitable that you would need to use the vendor-based diagnostics to troubleshoot your system. Alarms may indicate that there is a system-related problem, but it is unlikely that you could build a true diagnostics application in your code to indicate the full spectrum of problems you may encounter. For example, if the CPU dies or the memory fails, there is nothing left to tell you what the problem is 🙂

From an application/process problem perspective, if you have to resort to a technician going into the code to determine your problem, then your alarm and diagnostics code is not comprehensive enough! Remember your alarming guidelines: an Operator must be able to deal with an abnormal situation within a given time frame. If the Operator has to rely on a Technician to wade through code, then you really do need to perform an alarm review and build a better diagnostics system.

AQ: OPC driver advantages

A few years back, I had a devil of a time getting some OPC Modbus TCP drivers to work with Modbus RTU-to-TCP converters. The OPC drivers could not handle the 5-digit RTU addressing. You need to make sure the OPC driver you try actually works with your equipment; try before you buy is a definite here. Along with some of the complications, like dropped connections due to minor network glitches (a real headache and worth a topic all its own), is the ability to use tag pickers and the like. The best thing to happen to I/O addressing is the use of Data Objects in the PLC and HMI/SCADA. The other advantage OPC can give you is the ability to get more Quality Information on your I/O. Again, check before you buy. In my experience, the only protocol worse than Modbus in the Quality Info department is DDE, and that is pretty well gone. This still does not help when the Modbus slave reports stale data as if it were fresh. No I/O driver can sort that out; you need a heartbeat.

A shout out to all you equipment manufacturers who are putting Modbus RTU into equipment because it’s easy: PLEASE BUILD IN A HEARTBEAT that us integrators can monitor, so we can be sure the data is alive and well.
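
Here is a protocol-agnostic sketch of the stale-data watchdog I mean; read_heartbeat_register is a placeholder for whatever read call your driver or OPC client actually provides, not a real API:

```python
import time

# Stale-data watchdog sketch: the slave increments a heartbeat register every
# cycle; if the value stops changing, treat all data from that slave as stale,
# even if the driver still reports "good" quality.

def read_heartbeat_register():
    """Placeholder: return the current value of the slave's heartbeat register."""
    raise NotImplementedError("wire this to your Modbus/OPC read call")

def monitor_heartbeat(timeout_s=5.0, poll_s=1.0):
    last_value = read_heartbeat_register()
    last_change = time.monotonic()
    while True:
        time.sleep(poll_s)
        value = read_heartbeat_register()
        if value != last_value:
            last_value, last_change = value, time.monotonic()
        elif time.monotonic() - last_change > timeout_s:
            print("ALARM: heartbeat frozen -- treat all data from this slave as stale")
```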

Also, while you try before you buy, you want your HMI/SCADA to be able to tell the difference between a Good Read, a No Read and a Bad Read, particularly with an RTU network.

AQ: Self Excited Induction Generator (SEIG)

The output voltage and frequency of a self excited induction generator (SEIG) are totally dependent on the system to which it is attached.

The fact that it is self-excited means that there is no field control and therefore no voltage control. Instead, the residual magnetism in the rotor is used in conjunction with carefully chosen capacitors at its terminals to form a resonant condition that mutually assists the buildup of voltage, limited by the saturation characteristics of the stator. Once this balance point is reached, any normal load will cause the terminal voltage to drop.
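
As a rough illustration of “carefully chosen capacitors”, here is a first-cut sizing sketch assuming a star connection and a hypothetical magnetizing reactance; saturation and the connected load move the real operating point, so treat it only as a starting value:

```python
import math

# First-cut estimate of the per-phase excitation capacitance: choose C so that
# its reactance at the intended operating frequency roughly matches the
# machine's magnetizing reactance. Numbers are hypothetical.
f_op = 50.0   # intended electrical frequency, Hz (set by rotor speed)
X_m  = 45.0   # unsaturated magnetizing reactance per phase, ohms

C_phase = 1.0 / (2.0 * math.pi * f_op * X_m)
print(f"per-phase excitation capacitance ~ {C_phase * 1e6:.0f} uF")
```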

The frequency is totally reliant upon the speed of the rotor, so unless there is a fixed speed or governor controlled prime mover the load will see a frequency that changes with the prime mover and drops off as the load increases.

The above characteristics are what make SEIGs less than desirable for isolated/standalone operation IF steady well regulated AC power is required. On the other hand if the output is going to be rectified into DC then it can be used. Many of these undesirable “features” go away if the generator is attached to the grid which supplies steady voltage and frequency signals.

The way around all these disadvantages is to use a doubly fed induction generator (DFIG). In addition to the stator connection to the load, the wound rotor is provided with a varying AC field whose frequency is tightly controlled through smart electronics, so that a relatively fixed, controllable output voltage and frequency can be achieved despite the varying speed of the prime mover and the load. However, the cost of the wound-rotor induction machine plus the sophisticated control/power electronics is much higher than for other forms of variable speed/voltage generation.
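
A small sketch of the frequency relationship behind that control, with hypothetical machine numbers: the rotor-side converter only has to inject the slip frequency, which changes sign above synchronous speed:

```python
# DFIG rotor injection frequency = desired stator frequency minus the electrical
# frequency corresponding to the shaft speed. Numbers are hypothetical.
f_stator   = 50.0   # grid/stator frequency to hold, Hz
pole_pairs = 2
sync_rpm   = 60.0 * f_stator / pole_pairs   # 1500 rpm

for shaft_rpm in (1200, 1400, 1500, 1650, 1800):
    f_shaft = pole_pairs * shaft_rpm / 60.0   # electrical frequency of the shaft speed
    f_rotor = f_stator - f_shaft              # frequency the converter must inject
    slip    = (sync_rpm - shaft_rpm) / sync_rpm
    print(f"{shaft_rpm:5.0f} rpm: slip {slip:+.2f}, rotor injection {f_rotor:+.1f} Hz")
```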

AQ: How/where do we as engineers need to change?

System Design – A well designed system should provide clear and concise system status indications. Back in the ’70s (yes, I am that old), alarm and indicator panels provided this information in the control room, and device-level indicators further guided the technician to solving the problem. Today, these functions are implemented in a control room and machine HMI interface. Through the use of input sensor and output actuator feedback, correct system operation can be verified on every scan.
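
As a sketch of what that per-scan verification amounts to, not tied to any particular PLC platform (names and times below are placeholders):

```python
import time

# Each scan, compare the commanded state of an actuator with its feedback input
# and raise a device-level alarm if they disagree for longer than the actuator's
# expected travel time.

class ActuatorMonitor:
    def __init__(self, travel_time_s: float):
        self.travel_time_s = travel_time_s
        self._mismatch_since = None

    def scan(self, commanded: bool, feedback: bool, now: float) -> bool:
        """Return True if a discrepancy alarm should be raised on this scan."""
        if commanded == feedback:
            self._mismatch_since = None
            return False
        if self._mismatch_since is None:
            self._mismatch_since = now
        return (now - self._mismatch_since) > self.travel_time_s

# Example: a valve that should reach its commanded position within 10 s.
valve = ActuatorMonitor(travel_time_s=10.0)
alarm = valve.scan(commanded=True, feedback=False, now=time.monotonic())
```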

Program (software) Design – It has been estimated that a well written program is 40% algorithm and 60% error checking and parameter verification. “Ladder” is not an issue. Process and machine control systems today are programmed in ladder, structured text, function block, etc. The control program is typically considered intellectual property (IP) and in many cases “hidden” from view. This makes digging through the code impractical.

How/where do we as engineers need to change? – The industry as a whole needs to enforce better system design and performance. This initiative will come from the clients and be implemented by the developers. The cost/benefit trade-off will always be present: developers trying to improve their margins (reduce cost, raise price) and customers demanding more functionality while being willing to pay less. “We as engineers” are caught in the middle, trying to find better ways to achieve the seemingly impossible.

AQ: High voltage power delivery

You already know from your engineering that higher voltages result in lower operational losses for the same amount of power delivered. The bulk capacity of 3000 MW obviously has a great influence on the investment costs; it determines the voltage level and the required number of parallel circuits. Higher DC voltage levels have become more feasible for bulk power projects (such as this one), especially when the transmission line is more than 1000 km long. On the economics, investment costs for 800 kV DC systems have been much lower since the ’90s. Aside from the reduction of overall project costs, HVDC transmission lines at higher voltage levels require less right-of-way. Since you will also require fewer towers, as discussed below, you will also reduce the duration of the project (at least for the line).

Why DC and not AC? From a technical point of view, there are no special obstacles to higher DC voltages. Maintaining stable transmission can be difficult over long AC transmission lines, and the thermal loading capability is usually not decisive for them due to limitations in reactive power consumption. The power transmission capacity of HVDC lines is mainly limited by the maximum allowable conductor temperature in normal operation. However, the converter stations are expensive and offset the gain from the reduced cost of the transmission line. Thus a short line is cheaper with AC transmission, while a longer line is cheaper with DC.
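
The trade-off can be sketched with a toy break-even calculation; every figure below is an invented placeholder, purely to show the shape of the argument:

```python
# Toy AC-vs-DC break-even: HVDC converter stations add a fixed cost, but the DC
# line itself is cheaper per km (fewer conductors, narrower right-of-way,
# lighter towers). All figures are invented placeholders.
converter_stations_cost = 400.0   # fixed HVDC terminal cost, arbitrary units
ac_line_cost_per_km     = 1.00    # EHVAC line + compensation, per km
dc_line_cost_per_km     = 0.60    # HVDC bipole line, per km

break_even_km = converter_stations_cost / (ac_line_cost_per_km - dc_line_cost_per_km)
print(f"break-even distance ~ {break_even_km:.0f} km; "
      "beyond this, the DC alternative is cheaper")
```
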
One criterion to be considered is the insulation performance, which is determined by the overvoltage levels, the air clearances, the environmental conditions and the selection of insulators. The requirements on the insulation performance mainly affect the investment costs for the towers.

For the line insulation, air clearance requirements are more critical with EHVAC due to the nonlinear behavior of the switching overvoltage withstand. The air clearance requirement is a very important factor for the mechanical design of the tower. The mechanical load on the tower is considerably lower with HVDC due to the smaller number of sub-conductors required to fulfill the corona noise limits. Corona rings will always be significantly smaller for DC than for AC due to the lack of capacitive voltage grading of DC insulators.

With EHVAC, the switching overvoltage level is the decisive parameter; typical required air clearances at different system voltages correspond to switching overvoltage levels in the range of 1.8 to 2.6 p.u. of the phase-to-ground peak voltage. With HVDC, the switching overvoltages are lower, in the range of 1.6 to 1.8 p.u., and the air clearance is often determined by the required lightning performance of the line.