Category: Blog

AQ: Impedance analyzer

A graphical impedance analyzer with good phase resolution is a must. Some brands have all the bells and whistles but not the phase resolution necessary to accurately measure high-Q (100+) components over the instrument’s full frequency range (which should extend at least into the low megahertz). Of course the Agilent 4294A fills the performance bill, but with a $40k+ purchase bill it also empties the budget (as do similar high-end new models from Wayne Kerr). Used models from Wayne Kerr work very well and can be had for under $10k, but they are heavy and clunky, with very ugly (but still usable) displays.

Perhaps the best value is the Hioki IM3570, which works extremely well with superior phase resolution, has a very nice color touch-screen display (with all the expected engineering graphing formats), is compact and lightweight, and costs around $10k new. Its only downside is that its fan is annoyingly loud and does not reduce its noise output while the instrument is idle.

But where should an impedance analyzer rank on the power electronics design engineer’s basic equipment list (and why)?

Beyond the basic lower-cost necessities such as DMMs, bench power supplies, test leads, and soldering stations, I would rank a good impedance analyzer second only to a good oscilloscope. The impedance analyzer allows one to see all of a component’s secondary impedance characteristics and to directly compare similar components. Often overlooked is the information such an instrument can provide by examining component assemblies in situ on a circuit board assembly. Sometimes this can be very revealing of hidden but influential layout parasitics.

Equally importantly, an impedance analyzer allows accurate SPICE models to be quickly formulated so that simulation can be used as a meaningful design tool. Transformer magnetizing and leakage inductances can be measured, as well as inter-winding capacitance and frequency-dependent resistive losses. From these measurements, and with proper technique, a model can be formulated that nearly exactly matches the real component. Not only does this allow power circuits and control loops to be initially designed entirely by simulation (under the judicious eye of experience, of course), but it even allows one to effectively simulate the low-frequency end of a design’s EMI performance.
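As a simple illustration of how such measurements feed a model: with the secondary open, the impedance seen at the primary is dominated by the magnetizing inductance, and with the secondary shorted, by the leakage inductance. The sketch below assumes purely inductive analyzer readings at the test frequency; all the numbers are made up for illustration.

```python
import math

def inductance_from_z(z_magnitude_ohms: float, freq_hz: float) -> float:
    """Extract inductance from an impedance-magnitude reading,
    assuming the reactive part dominates at the test frequency."""
    return z_magnitude_ohms / (2 * math.pi * freq_hz)

# Hypothetical analyzer readings at a 10 kHz test frequency:
f_test = 10e3
z_open = 628.3    # ohms, secondary open    -> magnetizing branch
z_short = 3.14    # ohms, secondary shorted -> leakage branch

L_mag = inductance_from_z(z_open, f_test)    # ~10 mH
L_leak = inductance_from_z(z_short, f_test)  # ~50 uH

print(f"Lmag  = {L_mag * 1e3:.2f} mH")
print(f"Lleak = {L_leak * 1e6:.1f} uH")
```

With proper technique you would repeat this over frequency to capture the resistive losses and inter-winding capacitance as well.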

AQ: Maximum permissible value of grounding resistance

For grounding in the US it typically goes like this: Utility transformer has one ground rod. Then from the utility to the building you typically have three phase conductors and one neutral/ground conductor landing on the main panel with the utility meter. At that point we drive a ground rod. And we bond the ground rod to the water pipes (generally). And we bond the ground rod to the building steel (generally). Water pipes are generally very well connected to ground and the building steel is a nice user ground. With all these connections you typically have a good ground reference. Now, if that utility neutral wire is bad or too small, then you can have poor reference to ground between phases (a normal sign of that is flickering lights even when the load is not changing much).

Grounding impedance of the transformer and building ground rods is mainly for voltage stabilization, and under normal conditions it should have nothing to do with our return ground-fault current. See NEC 250.4(A)(5): “The earth shall not be considered as an effective ground-fault current path.”

Let’s say we have a system with a building transformer and panel-to-ground impedance of 1000 ohms (we built this place on solid rock). Okay, we have a poor 277V reference and we will have flickering lights (that 277V will bounce all over the place). But now, in our system above, if we take a phase wire and connect it to a motor shell, which is also connected to our grounding wire, will the upstream breaker trip? The answer is yes. If our phase-to-ground fault impedance is low, we will trip the upstream feeder breaker no matter what the main panel ground rod impedance is. My point here is that it does not matter what our transformer grounding is or what our panel grounding is (the ground rod is not important in this case). The breaker must trip because our circuit is complete between the phase conductor and the transformer wye leg.

As long as we have a utility-transformer-to-panel neutral conductor of proper size to handle our fault current, and we size our grounding conductors properly and connect them properly at each subpanel and at each motor in our case, we will apply nearly full phase-to-ground voltage, because our real ground-fault path is from that motor, through the grounding conductor, through our subpanels, to our main panel, then back to the transformer. That ground current must flow through our building grounding conductor to the main panel and back to the transformer through the utility neutral wire, which is connected to the wye leg of the transformer. It does not matter what the transformer-to-ground-rod connection is. We could remove the transformer-to-ground-rod connection and the main-panel-to-ground-rod connection completely, and we would still be connecting that phase wire, through the motor metal and the grounding conductor, back to the wye leg of the utility transformer, which completes our electrical circuit. Current will flow and the breaker will trip.
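The point can be made numerically. A quick sketch (the loop impedances below are assumptions for illustration, not code values) compares the fault current available through the metallic grounding-conductor/neutral path against what the earth path could ever carry:

```python
# Hypothetical 277 V phase-to-ground fault on a motor shell.
V_PHASE = 277.0          # volts, phase-to-neutral

# Metallic path: phase conductor + equipment grounding conductor
# + utility neutral, all low-impedance copper (assumed total).
z_metallic_loop = 0.05   # ohms (assumed)

# Earth-only path through the ground rods on "solid rock":
z_earth_loop = 1000.0    # ohms (the worst case from the text)

i_metallic = V_PHASE / z_metallic_loop   # ~5540 A -> instantaneous trip
i_earth = V_PHASE / z_earth_loop         # ~0.28 A -> no breaker ever trips

print(f"Fault current via EGC/neutral: {i_metallic:.0f} A")
print(f"Fault current via earth only:  {i_earth:.2f} A")
```

This is exactly why the NEC says the earth is not an effective ground-fault current path: a fraction of an amp will never open a feeder breaker, while the metallic loop delivers thousands of amps.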

AQ: Experience: Design

I tell customers that at least 50% of the design effort is the layout and routing, done by someone who knows what they are doing. Layer stackup is critical for multilayer designs. Yes, a solid design is required, but the perfect design goes down in flames with a bad layout. Rudy Severns said it best in one of his early books: you have to “think RF” when doing a layout. I have followed this philosophy for years with great success. Problems with a layout person who wants to run the autorouter or doesn’t understand analog layout? No problem: you, as the design engineer, do not have to release the design until it is to your satisfaction.

I have had Schottky diodes fail because circuit inductance caused just enough of a very-high-frequency ring (very hard to see on a scope) to exceed the PIV rating. Know your circuit impedances, and keep your traces short and fat.

I have fixed a number of problems associated with capacitor RMS ratings on AC-to-DC front ends. Related to this is the peak inrush current for a bridge rectifier at turn-on and, in some cases, during steady state. The unit can be turned on at the 90-degree phase angle into a capacitive load. This must be analyzed with assumptions for input resistance, and/or an inrush-current limiting circuit must be added.
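That worst case is easy to bound. A minimal sketch, with assumed line voltage and series-resistance values (none of them from the text), shows why some form of inrush limiting is usually needed:

```python
import math

# Worst case: the switch closes at the 90-degree point of a 120 Vrms
# line with the bulk capacitor fully discharged. The capacitor
# initially looks like a short, so series resistance sets the first peak.
V_RMS = 120.0                      # assumed line voltage
v_peak = V_RMS * math.sqrt(2)      # ~170 V at the 90-degree crest

r_series = 0.5                     # ohms: ESR + wiring + bridge (assumed)
i_inrush_peak = v_peak / r_series  # ~340 A first-cycle surge

# An NTC inrush limiter raises the cold-start resistance:
r_ntc_cold = 5.0                   # ohms, assumed cold resistance
i_with_ntc = v_peak / (r_series + r_ntc_cold)   # ~31 A

print(f"Unlimited inrush peak: {i_inrush_peak:.0f} A")
print(f"With NTC limiter:      {i_with_ntc:.0f} A")
```

The same arithmetic, with the real measured source and trace impedances, tells you whether the bridge and capacitor surge ratings survive a 90-degree turn-on.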

A satellite power supply had 70 degrees of phase margin on the bench with a resistive load, but oscillated on the real load. We measured the loop using the AP200 on the real load, and the phase margin was zero. Test the power supply on the real load before going to production, and then test a random sampling during the life of the product.

I used MathCAD for designs until the average models came out for SMPS. Yes, the equations are nice to see and work with, but they are just models nonetheless. I would rather have PSpice do the math while I pay attention to the models used and the overall design effort. Creating large closed-form equations is fraught with pitfalls, trapdoors, and landmines. Plus, hundreds of pages of MathCAD, which I have done, are hard to sell to the customer during a design review (most attendees drift off after page 1). PSpice schematics are more easily sold, and then modified as needed, with better understanding all around.

AQ: Paralleling IGBT modules

I’m not sure why the IGBTs would share the current just because they’re paralleled, unless external circuitry (series inductance, resistance, gate resistors) forces them to do so.

I would be pretty leery of paralleling these modules. As far as the PN diodes go, reverse recovery currents in PN diodes (especially if they are hard switched to a reverse voltage) are usually not limited by their internal semiconductor operation until they reach “soft recovery” (the point where the reverse current decays). They are usually limited by external circuitry (resistance, inductance, IGBT gate resistance). A perfect example: the traditional diode reverse recovery measurement test externally limits the reversing current to a linear falling ramp by using a series inductance. If you could reverse the voltage across the diode in a nanosecond, you would see an enormous reverse current spike.

Even though diode dopings are pretty well controlled these days, carrier lifetimes are not necessarily. Since one diode might “turn off” (go into a soft, decreasing reverse-current ramp, where the diode actually DOES limit its own current) before the other, you may end up with all the current going through one diode for at least a little while (the motor will look like an inductor, for all intents and purposes, during the diode turn-off). It is probably better to limit the maximum diode current externally for each driver.

Paralleling IGBT modules where the IGBT, but not the diode, has a positive temperature coefficient (PTC) is commonly done at higher powers. I personally have never done more than 3 x 600A modules in parallel, but if you look at things like high-power wind, things get very “interesting”. It is all a matter of analysis, good thermal coupling, symmetrical layout, and current derating. Once you get too many modules in parallel, the derating gets out of hand without some kind of passive or active element to ensure current sharing. Then you know it is time to switch to a higher-current module, or to a higher-voltage, lower-current module for the same power. The relative proportion of switching losses versus conduction losses also has a big part to play.
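A rough feel for why the derating "gets out of hand" comes from a one-line model: assume the worst-sharing module carries some fraction more than the average current, and size the total load so that module still stays within its rating. The 15% mismatch below is an arbitrary assumption, not a datasheet figure:

```python
# Rough derating sketch for N paralleled IGBT modules.
# Assumption: worst-case sharing mismatch means the hottest module
# carries (1 + mismatch) times the average module current.

def usable_current(n_modules: int, i_module_rated: float,
                   mismatch: float) -> float:
    """Total load current such that the worst-sharing module
    still stays within its individual rating."""
    i_avg_max = i_module_rated / (1.0 + mismatch)
    return n_modules * i_avg_max

I_RATED = 600.0   # amps per module (example figure from the text)
for n in (2, 3, 4):
    total = usable_current(n, I_RATED, mismatch=0.15)  # 15% assumed
    print(f"{n} modules: {total:.0f} A usable "
          f"({total / (n * I_RATED):.0%} of nameplate)")
```

The derating fraction itself is constant here, but the absolute amps given away grow with every module added, which is what eventually pushes you toward a bigger single module or active sharing.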

AQ: Is it worthwhile to build batteries into electric cars?

Energy storage is the issue. Can we make batteries, supercaps, or some other energy-storage technology that will allow an electric car to have a range of 300-500 miles? Motors and drives are already very efficient, so there is not much to be gained by improving their efficiency. As far as converting the entire fleet of cars to electric, I don’t expect to see this happen any time soon. The USA has more oil than all the rest of the world put together; we probably have enough to last 1000 years. Gasoline and diesel engines work very well for automobiles, trucks, and locomotives. The USA also has a huge supply of coal, which is a lot cheaper than oil. Electricity is cheaper than gasoline for two reasons: coal is much cheaper than oil, and coal-fired power plants have an efficiency of about 50%, while gasoline engines in cars have a thermal efficiency of about 17% (diesel locomotives are 50%+).
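The cost argument can be made concrete with a quick back-of-envelope comparison; every price and consumption figure below is my own assumption for illustration, not a number from the discussion:

```python
# Back-of-envelope energy cost per mile; all figures are assumptions.
gas_price_per_gal = 3.50       # USD per gallon, assumed
car_mpg = 30.0                 # assumed average fuel economy

elec_price_per_kwh = 0.13      # USD retail, assumed
ev_kwh_per_mile = 0.30         # wall-to-wheels consumption, assumed

cost_per_mile_gas = gas_price_per_gal / car_mpg           # ~$0.12
cost_per_mile_ev = elec_price_per_kwh * ev_kwh_per_mile   # ~$0.04

print(f"Gasoline: ${cost_per_mile_gas:.3f} per mile")
print(f"Electric: ${cost_per_mile_ev:.3f} per mile")
```

Under these assumptions the electric drive comes out roughly three times cheaper per mile, which is consistent with the efficiency gap between a 50%-efficient power plant and a 17%-efficient gasoline engine.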

I don’t believe the interchangeable battery pack idea is workable. Who is going to own the battery packs and build the charging stations? And what happens if you get to a charging station with a nearly dead battery and there is no charged battery available?

Who is going to build the charging stations? The most logical answer is the refueling station owners, as an added service. The more important question is ownership of the batteries. Suppose that, as a standard, all batteries have the same size, shape, connectors, and amp-hour (or kWh) rating, and a finite lifetime of, let’s say, 1,000 recharges. The standard batteries could have an embedded recharge counter. Electric car owners would pay the service charge, plus the cost of the kWh of energy, plus 1/1000 of the battery cost. That way, you pay the full cost of a new battery once, when you buy or convert to an electric car, and from then on you pay only the depreciation cost. This means you always own a new battery. The most probable owner of the batteries would be the battery suppliers, or a group or union of them (like a health insurance union). The charging stations collecting the depreciation cost would pass it on to the battery suppliers’ union. Every time a charging station gets a dead battery, or one with a full recharge counter, it would return it to the union and get it replaced with a new one. So, as the owner of an electric car, you would not need to worry about how old the replacement battery you get from the charging station is; you would always get a fully charged battery in exchange. The charging stations get their energy cost plus their service charges, and the battery suppliers get the price of their new battery supplies.
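Under that scheme, the price of a swap is simple arithmetic. All the figures below are invented for illustration:

```python
# Per-swap cost under the depreciation scheme above; every number
# here is an assumption for illustration.
battery_cost = 12000.0      # USD for a standard pack, assumed
rated_cycles = 1000         # recharges before the pack is retired
pack_kwh = 60.0             # assumed usable energy per swap
energy_price = 0.13         # USD per kWh at the station, assumed
service_fee = 3.00          # USD per swap, assumed

depreciation = battery_cost / rated_cycles          # $12.00 per swap
energy = pack_kwh * energy_price                    # $7.80 per swap
total_per_swap = depreciation + energy + service_fee

print(f"Cost per swap: ${total_per_swap:.2f}")
```

The split makes the incentives visible: the station keeps the service fee, the utility gets the energy charge, and the depreciation flows through to the battery suppliers' union.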

Buddies, these are just some wild ideas and I am sure someone will come up with a better and more workable idea. And we will see most of the cars on our roads without any carbon emission.

AQ: Home automation concept

The concept of home automation on a global scale is a good one. How to implement such a technology globally is an interesting problem, or rather a set of issues to be resolved. Before global approval can be accomplished, home automation products may need a strategy that starts with a look at companies that have succeeded in getting global approval for their products.

If we look at which companies have the most products distributed around the world, Intel is one of them. What’s interesting is that this company has used automation in its fabs for decades. This automation has allowed it to produce products faster and cheaper than the rest of the industry, and the company continues to invest in automation and in the ability to evolve with technology and management. Many companies compete on the world stage, but I don’t think many of them distribute as much product. So, to make home automation accepted and achieve global acceptance, the industry and its factories have to evolve to compete. That mission can be accomplished by adopting a strategy of updating factory automation: stop using products that were developed in the 1970s (another way of saying COTS) and progress to current and new systems. A ten-year-old factory may be considered obsolete if the equipment inside is as old as the factory.

Now for cost: when I think of PLCs or commercial controllers, I see COTS products that may use obsolete parts that are no longer in production, or old boards. So I see higher manufacturing cost and reduced reliability. Many procurement people evaluate risk in a way that rates older boards as lower risk for the short term, which is not a good evaluation for the long term. Cost is a function of how much product can be produced at the lowest cost and how efficient and competitive the producing company is, so time is money. The responsibility for cost lies with the company and its ability to produce a competitive product, not with the government.

Now on to control systems and safety: if an automation system is used in the home, safety has to be a major consideration. I know that at Intel fabs, if you violate any safety rule you won’t be working at that company long. To address safety, the product must conform to the appropriate standards, and safety should be a selling point for home automation. Automation engineers should remember that safety is one of the main considerations for an engineer: if someone gets hurt or killed because of a safety issue, the first person looked at is the engineer.

Now, 30% energy savings in my book is not enough; 35 to 40 percent should be the goal. Solar cells have improved, but they are most efficient in the southwestern US. Stirling engines are 1960s designs and use rare gases such as helium, which may not be a renewable resource. Wind generators need space and are electromechanical, so their reliability and maintenance need improving.

Now on to interface standards: most modern factories that produce processors use the Generic Equipment Model (GEM) standard, which works well. As for what standard interface to use and when: one box produced by one company may use RS-422 where another company’s uses RS-485, so the system engineer should resolve these issues before detailed design starts. Check with the IEEE, or you may be able to find the spec at EverySpec.com, which is a good place to look for some of the specs needed.

So I conclude: many issues exist, but when they are broken down, home automation is viable. It needs a concerted effort and commitment from, at the least, the companies and management that produce automation products, plus a different model for manufacturing and growing home systems. Home automation with a focus on energy savings as a goal is a good thing; we have a lot of work to do to make it happen.

AQ: Signal processing and communications theory

Coming from a signal processing and communications theory background, but with some experience in power design, I can’t resist the urge to chime in with a few remarks.

There are many engineering methods to deal with sources of interference, including noise from switching converters, and spread spectrum techniques are simply one more tool that may be applied to achieve a desired level of performance.

Spread spectrum techniques will indeed allow a quasi-peak EMC test to be passed when it might otherwise be failed. Is this an appropriate application for this technique?

The quasi-peak detector was developed with the intention of providing a benchmark for determining the psycho-acoustic “annoyance” of interference on analog communications systems (more specifically, predominantly narrowband AM-type communication systems). Spread-spectrum techniques resulting in a reduced QP detector reading will almost undoubtedly reduce the annoyance the interference would otherwise have presented to the listener. Thus the intent was to reduce the degree of objectionable interference, and the application of spread spectrum meets that goal. This doesn’t seem at all like “cheating” to me; the proper intent of the regulatory limit is still being met.

On the other hand, as earlier posters have pointed out, spectrum spreading does nothing to reduce the total power of the interference; it simply spreads it over a wider bandwidth. This spreading provides two potential benefits. The most obvious occurs if the victim of the interference is inherently narrowband: spreading the spectrum of the interference beyond the victim bandwidth provides an inherent improvement in signal-to-noise ratio. A second, perhaps less obvious, benefit is that the interference becomes more noise-like in its statistics. Noise-like interference is less objectionable to the human ear than impulsive noise, and it should also be recognized that it is less objectionable to many digital transmission systems too.

However, from an information-theoretic perspective the nature of the interference doesn’t matter; only the signal-to-noise ratio matters. Many modern communication systems employ wide bandwidths, along with powerful adaptive modulation and coding schemes that effectively de-correlate interference sources (making their effect noise-like). These receivers don’t care whether the interference is narrowband or wideband in terms of bit error rate (BER); they will be affected largely the same by a given amount of interference power (in theory identically, though implementation limitations still leave some gap to the theoretical limits).

It is worth noting, however, that while spectrum-spreading techniques do not reduce the interference power, they don’t make it any worse either. Thus these techniques may (legitimately, I would argue, as per the above) help with passing a test that specifies the CISPR quasi-peak detector, and they should not make performance on a test specifying the newer CISPR RMS+Average detector any worse.
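The trade described above, i.e. the same total power but a lower peak spectral density, is easy to demonstrate numerically. The sketch below compares an idealized fixed-frequency square wave against a triangularly frequency-dithered one; the sample rate, dither depth, and modulation rate are arbitrary assumptions:

```python
import numpy as np

fs = 10e6                        # sample rate, Hz (assumed)
t = np.arange(0, 0.01, 1 / fs)   # 10 ms observation window
f0 = 100e3                       # nominal switching frequency, Hz

# Fixed-frequency square wave:
fixed = np.sign(np.sin(2 * np.pi * f0 * t))

# Triangular frequency dither of +/-5% at a 250 Hz modulation rate:
dither = 0.05 * f0 * (2 * np.abs((t * 500) % 2 - 1) - 1)
phase = 2 * np.pi * np.cumsum(f0 + dither) / fs
spread = np.sign(np.sin(phase))

def peak_db(x):
    """Peak spectral magnitude in dB (arbitrary reference)."""
    spec = np.abs(np.fft.rfft(x)) / len(x)
    return 20 * np.log10(spec.max())

# Both waveforms are +/-1, so total power is identical (Parseval);
# only the peak spectral density of the dithered one drops.
print(f"fixed  peak: {peak_db(fixed):.1f} dB")
print(f"spread peak: {peak_db(spread):.1f} dB")
```

A peak detector (and, to a lesser degree, a quasi-peak detector) rewards exactly this reduction, while an RMS measurement over the whole band sees the same total power either way.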

It should always be an engineering goal to keep interference to a reasonable minimum, and I would agree that it is aesthetically most satisfying (and often cheapest and simplest) to achieve this objective by reducing the interference at the source (a broad definition covering aspects of SMPS design from topology selection to PCB layout and beyond). However, the objective of controlling noise at the source shouldn’t eliminate alternative methods from consideration in any given application.

There will always be the question of how good is good enough and it is the job of various regulatory bodies to define these requirements and to do so robustly enough such that the compliance tests can’t be “gamed”.

AQ: Friendly system without technicians diagnose

How do we make our systems so friendly that they do not need technicians to help diagnose problems? Most of the more obvious answers have been well documented here, but to emphasize the point: diagnostics and alarms can dominate the amount of code constructed for an application. That is, the code required to fully diagnose an application may be as much as, if not more than, the code required to make the application work in the first place!

I have seen HMIs with diagnostic screens showing an animated version of the cause-and-effect code that allows users to see where the trip condition is. I have also seen screens depicting prestart checks, operator guides, and so on, all animated to help the user. Allen-Bradley even has a program viewer that can be embedded in an HMI screen, but you would probably need a technician to understand the code.

From a control-system problem perspective, it is inevitable that you will need the vendor’s diagnostics to troubleshoot your system. Alarms may indicate that there is a system-related problem, but it is unlikely that you could build a true diagnostics application in your own code to cover the full spectrum of problems you may encounter. For example, if the CPU dies or the memory fails, there is nothing left to tell you what the problem is 🙂

From an application/process problem perspective, if you have to resort to a technician going into the code to determine your problem, then your alarm and diagnostics code is not comprehensive enough! Remember your alarming guidelines: an operator must be able to deal with an abnormal situation within a given time frame. If the operator has to rely on a technician to wade through code, then you really do need to perform an alarm review and build a better diagnostics system.

AQ: OPC drivers advantage

A few years back, I had a devil of a time getting some OPC Modbus TCP drivers to work with Modbus RTU-to-TCP converters. The OPC drivers could not handle the 5-digit RTU addressing. You need to make sure the OPC driver you try actually works with your equipment; try before you buy definitely applies here. Along with complications like dropped connections due to minor network glitches (a real headache and worth a topic all its own) is the usability of tag pickers and the like. The best thing to happen to I/O addressing is the use of data objects in the PLC and HMI/SCADA. The other advantage OPC can give you is the ability to get more quality information on your I/O. Again, check before you buy. In my experience, the only protocol worse than Modbus in the quality-information department is DDE, and that is pretty well gone. None of this helps when a Modbus slave reports stale data as if it were fresh; no I/O driver can sort that out. You need a heartbeat.

A shout-out to all you equipment manufacturers putting Modbus RTU into equipment because it’s easy: PLEASE BUILD IN A HEARTBEAT that us integrators can monitor, so we can be sure the data is alive and well.
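For what it's worth, the integrator's side of such a heartbeat can be as simple as watching a free-running counter register and declaring the data stale when it stops moving. A minimal sketch (the class and register convention are hypothetical, not tied to any particular driver or OPC server):

```python
import time

class HeartbeatMonitor:
    """Declare Modbus data stale when a device-side heartbeat
    counter stops changing. Hypothetical helper for illustration."""

    def __init__(self, timeout_s: float = 5.0):
        self.timeout_s = timeout_s
        self._last_value = None
        self._last_change = time.monotonic()

    def update(self, counter: int) -> bool:
        """Feed the latest polled heartbeat register value.
        Returns True while the data is considered alive."""
        now = time.monotonic()
        if counter != self._last_value:
            self._last_value = counter
            self._last_change = now
        return (now - self._last_change) <= self.timeout_s

mon = HeartbeatMonitor(timeout_s=5.0)
alive = mon.update(41)   # first sample -> counter "changed" -> alive
print("data alive:", alive)
```

The key is that the device increments the counter itself; a driver reporting "Good" quality on a frozen slave will still fail this check.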

Also, while you try before you buy, make sure your HMI/SCADA can tell the difference between a good read, no read, and a bad read, particularly with an RTU network.

AQ: Self Excited Induction Generator (SEIG)

The output voltage and frequency of a self excited induction generator (SEIG) are totally dependent on the system to which it is attached.

The fact that it is self-excited means there is no field control and therefore no voltage control. Instead, the residual magnetism in the rotor is used in conjunction with carefully chosen capacitors at the terminals to form a resonant condition that mutually assists the buildup of voltage, limited by the saturation characteristics of the stator. Once this balance point is reached, any normal load will cause the terminal voltage to drop.
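The capacitor choice amounts to resonating the terminal capacitance with the machine's magnetizing inductance near the desired operating frequency, roughly C = 1/((2*pi*f)^2 * Lm) per phase. A quick sketch with assumed machine values (not from the text):

```python
import math

# Self-excitation requires the terminal capacitors to resonate with
# the magnetizing inductance near the operating frequency.
f = 60.0          # target frequency, Hz
L_mag = 0.12      # magnetizing inductance per phase, henries (assumed)

w = 2 * math.pi * f
C_per_phase = 1.0 / (w**2 * L_mag)   # resonance condition

print(f"Required capacitance: {C_per_phase * 1e6:.1f} uF per phase")
```

In practice the machine saturates, the effective magnetizing inductance drops, and the operating point settles where the capacitor load line crosses the saturation curve, which is what caps the voltage buildup.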

The frequency is totally reliant upon the speed of the rotor, so unless there is a fixed-speed or governor-controlled prime mover, the load will see a frequency that changes with the prime mover and drops off as the load increases.

The above characteristics are what make SEIGs less than desirable for isolated/standalone operation if steady, well-regulated AC power is required. On the other hand, if the output is going to be rectified to DC, an SEIG can be used. Many of these undesirable “features” go away if the generator is attached to the grid, which supplies steady voltage and frequency references.

The way around all these disadvantages is to use a doubly fed induction generator (DFIG). In addition to the stator connection to the load, the wound rotor is fed with a varying AC field whose frequency is tightly controlled through smart electronics, so that a relatively fixed, controllable output voltage and frequency can be achieved despite the varying speed of the prime mover and the load. However, the cost of the wound-rotor induction machine plus the sophisticated control/power electronics is much higher than that of other forms of variable speed/voltage generation.