Category: Blog

AQ: Avoid voltage drop influence

My cable size and transformer size should give me a maximum 3% drop, or 6% to 10% at the very worst. If it is the only piece of equipment on the system, then maybe you can tolerate 15%. If not, the voltage dip may affect sensitive equipment and lighting.

This is very annoying for office staff: each time a machine starts, the lights dim. It does not matter what standard you quote; I cannot accept 10% to 15%. Make a precise calculation and add a 10% tolerance to avoid it.

In most cases, this problem comes from cable undersizing, and then we have to settle for a standard that allows 15% maximum.

Just recently I had to order a transformer and cable change for a project which was grossly undersized.
I have had to redesign the electrical portion of a conveyor and crushing system to bring the system design into compliance with applicable safety codes. The site was outdoors at a mine in Arizona, where ambient temperatures reach 120 °F. The electrical calculation and design software did not include any derating of conductor sizes for cable spacing and density within cable trays, number of conductors per raceway, ambient temperature versus cable temperature rating, and so on. Few of the cables had been increased in size to compensate for voltage drop between the power source and the respective motor or transformer loads.
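
To make the derating point concrete, here is a minimal sketch of the arithmetic the software skipped. The correction factors below are placeholders in the style of the NEC ampacity tables; pull the real numbers from the current code edition, not from this example.

```python
# Illustrative ampacity derating check. The factors are assumptions in the
# style of NEC 310.15-type tables; always verify against the current edition.

def derated_ampacity(base_ampacity_a, temp_correction, fill_adjustment):
    """Apply ambient-temperature correction and conductor-fill adjustment."""
    return base_ampacity_a * temp_correction * fill_adjustment

# Example: 90 C insulated conductor, ~50 C (about 120 F) ambient,
# ten current-carrying conductors in one raceway section.
base = 130.0          # A, table ampacity for the conductor (assumed)
temp_factor = 0.82    # ambient-temperature correction (assumed, check table)
fill_factor = 0.50    # 10-20 conductors in a raceway (assumed, check table)

print(f"Derated ampacity: {derated_ampacity(base, temp_factor, fill_factor):.0f} A")
# ~53 A: a conductor rated 130 A on paper may legally carry far less here.
```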

Feeder cables to remote power distribution centers were too small, as voltage drop had not been incorporated in the initial design. The voltage drop should not be greater than 3%, as other factors (alternating loads, system voltage variation, etc.) may push the overall drop to 5%.
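
A sketch of the kind of feeder check that was missing. The cable resistance and reactance are assumed values for illustration, not data from any code table.

```python
import math

def three_phase_drop_percent(i_a, length_m, r_ohm_per_km, x_ohm_per_km,
                             v_ll=480.0, pf=0.85):
    """Approximate line-to-line voltage drop for a three-phase feeder."""
    sin_phi = math.sqrt(1.0 - pf ** 2)
    z_eff = (r_ohm_per_km * pf + x_ohm_per_km * sin_phi) * length_m / 1000.0
    v_drop = math.sqrt(3.0) * i_a * z_eff
    return 100.0 * v_drop / v_ll

# Example: 150 A load over a 200 m run with assumed cable data.
pct = three_phase_drop_percent(150, 200, r_ohm_per_km=0.26, x_ohm_per_km=0.08)
print(f"Voltage drop: {pct:.1f} %")   # ~2.8 %, just inside a 3 % budget
```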

The electrical system had to be re-designed with larger cables, transformer, MCCs, etc., as none of the design software packages factor in the deratings required by the National Electrical Code (NFPA 70) or the Canadian Electrical Code, which references the NEC.

AQ: Experience: Flyback

My first SMPS design was a multiple-output flyback. This was in 1976, when there were no PWM controllers, so I used a 556 (half as a 30 kHz oscillator, half as the PWM generator) plus a 3904 NPN whose VBE was the reference and which also provided gain for the error-amp function. I haphazardly wound the windings on a 25 mm toroid. It rang like a tank circuit. I quickly abandoned that transformer, and after a year and many hours on the bench, I had a production-grade SMPS.
Since it went into a private-aircraft weather reader system, I needed an exterior SMPS, which was a buck converter. I used an LM105 linear regulator with positive feedback to make it oscillate (one of National's app notes). It worked, but I soon learned that the electrolytic capacitors lost all of their capacitance at -25 °C. It later worked with military-grade capacitors.

I had small hills of dead MOSFETs and their directly attached controllers. When the first power MOSFETs emerged in 1979, I blew up so many that I almost wrote them off. They had some real issues with drain-source voltage overstress. They have come a long way since.

As far as very wide-range flyback converters go, please dig up AN1327 on the ONSEMI website. It describes a control strategy (fixed off-time, variable on-time) and the transformer design.
The precursor to that was a 3 W flyback that drove three floating gate-drive circuits and had an input range of 85 VAC to 576 VAC. It was for a three-phase industrial motor drive. The toughest area was the transformer. To meet the UL and IEC isolation requirements, it would have required a very large core and bobbin plus a lot of tape. The PCB had dimensions of 50 mm x 50 mm and was 9 mm thick. A magnetics designer named Jeff Brown from Cramerco.com is now my magnetics god. He designed me a custom core and bobbin that was 10 mm high on basically an EF15-sized core. The three-piece bobbin met all of the spacing requirements without tape. The customer was expecting a two- or three-tier product offering for the different voltage ranges, but instead could offer only one. They were thrilled.
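
For a feel of how the fixed off-time, variable on-time strategy behaves across such an input range, here is a toy volt-second-balance sweep. The turns ratio, output voltage, and off-time are made-up numbers for illustration, not the AN1327 design values.

```python
# Fixed off-time, variable on-time flyback: at the CCM/DCM boundary,
# V_in * t_on = (Np/Ns) * V_out * t_off, so t_on shrinks as V_in rises.

N_RATIO = 6.0      # primary:secondary turns ratio (assumed)
V_OUT = 15.0       # output voltage in volts (assumed)
T_OFF = 5e-6       # fixed off-time in seconds (assumed)

for v_ac in (85, 230, 576):
    v_in = v_ac * 2 ** 0.5                     # rectified peak, ripple ignored
    t_on = N_RATIO * V_OUT * T_OFF / v_in      # volt-second balance
    f_sw = 1.0 / (t_on + T_OFF)
    print(f"{v_ac:3d} VAC: t_on = {t_on * 1e6:5.2f} us, f = {f_sw / 1e3:6.1f} kHz")
# t_on swings roughly 7:1 over the range while the frequency moves far less.
```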

It can be done; watch your breakdown voltages, spacings, and RMS currents. I found that around 17 to 20 W is about the practical limit for an EF40 core before the transformer RMS currents get too high.
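
As a starting point for that RMS-current check: a discontinuous-mode primary current is a triangle of peak I_pk and duty D, whose RMS is I_pk * sqrt(D/3). The numbers below are assumptions for illustration.

```python
import math

def dcm_primary_rms(p_out_w, v_in, t_on, t_period, efficiency=0.8):
    """Peak and RMS primary current for a DCM flyback, from energy balance."""
    p_in = p_out_w / efficiency
    energy = p_in * t_period            # joules to transfer each cycle
    # 0.5 * L * I_pk^2 = energy and V_in = L * I_pk / t_on together give:
    i_pk = 2.0 * energy / (v_in * t_on)
    duty = t_on / t_period
    return i_pk, i_pk * math.sqrt(duty / 3.0)

i_pk, i_rms = dcm_primary_rms(p_out_w=18, v_in=120, t_on=4e-6, t_period=10e-6)
print(f"I_pk = {i_pk:.2f} A, I_rms = {i_rms:.2f} A")
```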

AQ: Impedance analyzer

A graphical impedance analyzer with good phase resolution is a must. Some brands have all the bells and whistles but not the phase resolution necessary to accurately measure high-Q (100+) components over the instrument’s full frequency range (which should extend at least into the low megahertz). Of course the Agilent 4294A fills the performance bill, but with a $40k+ purchase bill it also empties the budget (like similar high-end new models from Wayne Kerr). Used models from Wayne Kerr work very well and can be had for under $10k, but they are very heavy and clunky, with very ugly (but still usable) displays.

Perhaps the best value may be the Hioki IM3570, which works extremely well with superior phase resolution, has a very nice color touchscreen display (with all the expected engineering graphing formats), is compact and lightweight, and costs around $10k new. Its only downside is that its fan is annoyingly loud and does not quiet down when the instrument is idle.

But where should an impedance analyzer rank on the power electronics design engineer’s basic equipment list (and why)?

Beyond the basic lower cost necessities such as DMMs, bench power supplies, test leads, soldering stations, etcetera, I would rank a good impedance analyzer second only to a good oscilloscope. The impedance analyzer allows one to see all of a component’s secondary impedance characteristics and to directly compare similar components. Often overlooked is the information such an instrument can provide by examining component assemblies in situ in a circuit board assembly. Sometimes this can be very revealing of hidden, but influential layout parasitics.

Equally importantly, an impedance analyzer allows accurate SPICE models to be quickly formulated so that simulation can be used as a meaningful design tool. Transformer magnetizing and leakage inductances can be measured as well as inter-winding capacitance and frequency dependent resistive losses. From these measurements and with proper technique, a model can be formulated that nearly exactly matches the real part. Not only does this allow power circuits and control loops to be initially designed entirely by simulation (under the judicious eye of experience, of course), but it even allows one to effectively simulate the low frequency end of a design’s EMI performance.
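
As one small example of that model-building step (my own sketch, with hypothetical values and node names): two analyzer readings, the low-frequency inductance and the self-resonant frequency, are enough to seed a minimal three-element inductor model.

```python
import math

def inductor_model(l_measured_h, f_srf_hz, r_series_ohm):
    """Derive parallel winding capacitance from inductance and SRF."""
    c_par = 1.0 / ((2.0 * math.pi * f_srf_hz) ** 2 * l_measured_h)
    return {"L": l_measured_h, "C_par": c_par, "R_s": r_series_ohm}

m = inductor_model(l_measured_h=100e-6, f_srf_hz=3e6, r_series_ohm=0.15)
print(f"C_par = {m['C_par'] * 1e12:.1f} pF")          # ~28 pF for these values
# SPICE netlist fragment built from the fit (node names hypothetical):
print(f"Rs 1 2 {m['R_s']}")
print(f"L1 2 3 {m['L']}")
print(f"Cp 1 3 {m['C_par']:.3e}")
```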

AQ: Maximum permissible value of grounding resistance

For grounding in the US it typically goes like this: Utility transformer has one ground rod. Then from the utility to the building you typically have three phase conductors and one neutral/ground conductor landing on the main panel with the utility meter. At that point we drive a ground rod. And we bond the ground rod to the water pipes (generally). And we bond the ground rod to the building steel (generally). Water pipes are generally very well connected to ground and the building steel is a nice user ground. With all these connections you typically have a good ground reference. Now, if that utility neutral wire is bad or too small, then you can have poor reference to ground between phases (a normal sign of that is flickering lights even when the load is not changing much).

Grounding impedance of the transformer and building ground rods is mainly for voltage stabilization and under normal conditions should have nothing to do with our return ground-fault current. See NEC 250.4(A)(5): “The earth shall not be considered as an effective ground-fault current path.”

Let’s say we have a system with the building transformer and panel-to-ground impedance of 1000 ohms (we built this place on solid rock). Okay, we have a poor 277 V reference and we will have flickering lights (that 277 V will bounce all over the place). But now, in our system above, if we take a phase wire and connect it to a motor shell, which is also connected to our grounding wire, will the upstream breaker trip? The answer is yes. If our phase-to-ground fault impedance is low, we will trip the upstream feeder breaker no matter what the main panel ground rod impedance is. My point here is that it does not matter what our transformer grounding is or what our panel grounding is (the ground rod is not important in this case). The breaker must trip because our circuit is complete between the phase conductor and the transformer wye leg.

As long as we have a utility-transformer-to-panel neutral conductor of proper size to handle our fault current, and we size our grounding conductors properly and connect them properly at each subpanel and each motor in our case, we will apply nearly full phase-to-ground voltage, because our real ground-fault path is from that motor, through the grounding conductor, through our subpanels, to our main panel, then back to the transformer. That ground current must flow through our building grounding conductor to the main panel and back to the transformer through that utility neutral wire, which is connected to the wye leg of the transformer. And it does not matter what the transformer-to-ground-rod connection is. We could remove the transformer-to-ground-rod connection and the main-panel-to-ground-rod connection completely, and we are still connecting that phase wire, through the motor metal and the grounding conductor, back to the wye leg of that utility transformer, which completes our electrical circuit. Current will flow and the breaker will trip.
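
To put rough numbers on that argument (impedances assumed for illustration):

```python
# The metallic grounding-conductor path trips the breaker; the earth path
# through a ground rod never could. Path impedances are assumptions.

V_PHASE = 277.0        # phase-to-neutral volts, as in the example above

z_metallic = 0.1       # ohms: phase wire + grounding conductor + neutral (assumed)
z_earth_rod = 1000.0   # ohms: ground rod on solid rock (from the example)

print(f"Through the grounding conductor: {V_PHASE / z_metallic:,.0f} A -> instant trip")
print(f"Through the earth/rod path: {V_PHASE / z_earth_rod:.2f} A -> trips nothing")
```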

AQ: Experience: Design

I tell customers that at least 50% of the design effort is the layout and routing, done by someone who knows what they are doing. Layer stackup is very critical for multilayer designs. Yes, a solid design is required, but the perfect design goes down in flames with a bad layout. Rudy Severns said it best in one of his early books: you have to “think RF” when doing a layout. I have followed this philosophy for years with great success. Problems with a layout person who wants to run the autorouter or doesn’t understand analog layout? No problem: you, as the design engineer, do not have to release the design until it is to your satisfaction.

I have had Schottky diodes fail because circuit inductance caused just enough of a very high-frequency ring (very hard to see on a scope) to exceed the PIV. Know your circuit impedances, and keep your traces short and fat.
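
A first-order way to bound that ring, treating the loop as a simple LC and using the characteristic impedance sqrt(L/C); all values are assumptions for illustration.

```python
import math

# Current in loop inductance L ringing into diode capacitance C overshoots
# the bus by roughly I * sqrt(L / C). Values below are assumed.

def ring_overshoot_v(i_a, l_h, c_f):
    return i_a * math.sqrt(l_h / c_f)

v_bus = 24.0
overshoot = ring_overshoot_v(i_a=3.0, l_h=50e-9, c_f=200e-12)
print(f"Peak reverse voltage ~ {v_bus + overshoot:.0f} V")
# ~71 V: enough to kill a 40 V Schottky while barely showing on a scope.
```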

I have fixed a number of problems associated with capacitor RMS ratings on AC-DC front ends. Along with this is the peak inrush current for a bridge rectifier at turn-on and, in some cases, during steady state. The unit can be turned on at the 90° phase angle into a capacitive load. This must be analyzed with assumptions for input resistance, and/or an inrush-limiting circuit must be added.
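
A worst-case sketch of that 90° turn-on case, with an assumed series resistance in the charging path:

```python
# Switch-on at the line peak puts the full peak voltage across a discharged
# bulk capacitor, limited only by series resistance. Values are assumptions.

V_LINE_RMS = 120.0
v_peak = V_LINE_RMS * 2 ** 0.5       # ~170 V at the worst-case instant

r_series = 0.5                       # ohms: source + wiring + bridge + ESR (assumed)
print(f"Peak inrush ~ {v_peak / r_series:.0f} A")   # ~340 A without a limiter
```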

A satellite power supply had 70° of phase margin on the bench with a resistive load, but oscillated on the real load. We measured the loop using the AP200 on the real load, and the phase margin was zero. Test the power supply on the real load before going to production, and then test a random sampling during the life of the product.

I used MathCAD for designs until the average models came out for SMPS. Yes, the equations are nice to see and work with, but they are just models nonetheless. I would rather have PSpice do the math while I pay attention to the models used and the overall design effort. Creating large closed-form equations is fraught with pitfalls, trapdoors, and landmines. Plus, hundreds of pages of MathCAD, which I have done, are hard to sell to the customer during a design review (most attendees drift off after page 1). The PSpice schematics are more easily sold and then modified as needed, with better understanding all around.

AQ: Spread spectrum of power supply

Having led design efforts for very sensitive instrumentation with high-frequency A/D converters of greater than 20 bits of resolution, my viewpoint is mainly concerned with the noise in the regulated supply output. In these designs the fairly typical 50 mV peak-to-peak noise is totally unacceptable, and some customers cannot stand 1 uVrms of noise at certain frequencies. While spread spectrum may help the power supply designer, it may also raise havoc for the user of the regulated output. The amplitude of the switching spikes (input or output), as some have said, is not reduced by dithering the switching frequency. Sometimes locking the switching so that, in time, it does not interfere with the circuits using the output can help. Some may think this is cheating, but as was said, it is very difficult to get rid of most 10 MHz noise, and that extreme difficulty applies to many of the harmonics above 100 kHz. (Beginners who think that being 20 to 100 times above the LC filter corner will reduce the switching noise by 40 to 200 are sadly wrong: once you pass 100 kHz, many capacitors and inductors have parasitics that make it very hard to get high attenuation in one LC stage, and often there is no room for more. More inductors often introduce more losses as well.) We should reduce all the noise we can and then use other techniques as necessary. With spread spectrum becoming more popular, we may soon see regulation of its total noise output as well.
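
To illustrate the parenthetical point, here is a small sketch comparing the ideal LC divider against one whose capacitor model includes ESR and ESL (all component values assumed; the inductor is left ideal, so reality is worse still):

```python
import math

def attenuation_db(f, l=10e-6, c=10e-6, esr=0.01, esl=5e-9):
    """LC low-pass attenuation with a parasitic-laden capacitor."""
    w = 2 * math.pi * f
    z_l = complex(0, w * l)                        # filter inductor, ideal here
    z_c = complex(esr, w * esl - 1.0 / (w * c))    # real capacitor: ESR + ESL
    return -20 * math.log10(abs(z_c / (z_l + z_c)))

for f in (100e3, 1e6, 10e6):
    ideal = -20 * math.log10(abs(1.0 / (1.0 - (2 * math.pi * f) ** 2 * 10e-6 * 10e-6)))
    print(f"{f / 1e6:5.2f} MHz: ideal {ideal:5.1f} dB, "
          f"with parasitics {attenuation_db(f):5.1f} dB")
# Past the capacitor's self-resonance the real filter stops improving and even
# degrades, while the ideal formula keeps promising 40 dB per decade.
```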

One form of troublesome noise is common-mode noise coming out of the power inputs to the power supply. If it is present on the power input, it is very likely also present in the “regulated” output power if that output is floating. Here, careful design of the switching power magnetics and care in the layout can minimize this noise enough that filters may be able to keep the residual within acceptable limits. Ray discusses some of this in his class, but many non-linear managers frequently do not think it is reasonable or necessary for the power supply design engineer to be involved in the layout or location of copper traces. Why not? The companies that sell the multi-$100K+ software told their bosses the software automatically optimizes and routes the traces.

Spread spectrum is a tool that may be useful to some but not to all. I hope the sales pitch for those control chips does not lull unsuspecting new designers into complacency about their filter requirements.

AQ: Home automation concept

The concept of home automation on a global scale is a good one. How to implement such a technology on a global scale is an interesting problem, or I should say a set of issues to be resolved. Before global approval can be accomplished, home automation products may need a strategy that starts with a look at companies that have succeeded in getting global approval for their products.

If we look at the companies with the most products distributed around the world, Intel is one of them. What’s interesting is that this company has used automation in its fabs for decades. This automation has allowed it to produce products faster and cheaper than the rest of the industry. The company continues to invest in automation and in the ability to evolve with technology and management. Many companies compete on the world stage, but I don’t think many of them distribute as much product. So, to make home automation accepted and achieve global acceptance, the industry and its factories have to evolve to compete. That mission can be accomplished by adopting a strategy that updates the automation in their factories: stop using products that were used and developed in the 1970s (another way of saying COTS) and progress to current and new systems. A ten-year-old factory may be considered obsolete if the equipment inside is as old as the factory.

Now for cost: when I think of PLCs or commercial controllers, I see a COTS product that may be using obsolete parts that are no longer in production, or old boards. So I see higher manufacturing cost and a reduction in reliability. Many procurement people evaluate risk in a way that rates older boards lower in risk for the short term, which is not a good evaluation for the long term. The cost is a function of how much product can be produced at the lowest cost and how efficient and competitive the producing company is. So time is money. The responsibility for cost lies with the company and its ability to produce a competitive product, not with the government.

Now on to control systems and safety: if the automation system is used in the house, safety has to be a major consideration. I know that at Intel fabs, if you violate any safety rule, you won’t be working at that company long. To address safety, the product must conform to the appropriate standards. Safety should be a selling point for home automation. Automation engineers should remember that safety is one of the main considerations for an engineer. If someone gets hurt or killed because of a safety issue, the first person looked at is the engineer.

Now, 30% energy saving in my book is not enough; 35 to 40 percent should be the goal. Solar cells have improved, but they are most efficient in the southwest US. Stirling engines are 1960s designs and use rare gases such as helium, which may not be a renewable resource. Wind generators need space and are electromechanical, so reliability and maintenance need improving.

Now on to interface standards. Most modern factories that produce processors use the Generic Equipment Model (GEM) standard, and it works well. As for what and when to use a standard interface: one box produced by one company may use RS-422 where another company’s uses RS-485, so the system engineer should resolve these issues before detailed design starts. Check with the IEEE, or you may be able to find the spec at everyspec.com, which is a good place to look for some of the specs you need.

So I conclude: many issues exist, but when broken down, home automation is viable. It needs a concerted effort and commitment, at least from the companies and management that produce automation products, and a different model for manufacturing and growing the home systems.
Home automation with a focus on energy savings as a goal is a good thing. We have a lot of work to make it happen.

AQ: Signal processing and communications theory

Coming from a signal processing and communications theory background, but with some experience in power design, I can’t resist the urge to chime in with a few remarks.

There are many engineering methods to deal with sources of interference, including noise from switching converters, and spread spectrum techniques are simply one more tool that may be applied to achieve a desired level of performance.

Spread spectrum techniques will indeed allow a quasi-peak EMC test to be passed when it might otherwise be failed. Is this an appropriate application for this technique?

The quasi-peak detector was developed with the intention of providing a benchmark for determining the psycho-acoustic “annoyance” of interference on analog communications systems (more specifically, predominantly narrowband AM-type communication systems). Spread spectrum techniques resulting in a reduced QP detector reading will almost undoubtedly reduce the annoyance the interference would otherwise have presented to the listener. Thus the intent was to reduce the degree of objectionable interference, and the application of spread spectrum meets that goal. This doesn’t seem at all like “cheating” to me; the proper intent of the regulatory limit is still being met.

On the other hand, as earlier posters have pointed out, the application of spectrum spreading does nothing to reduce the total power of the interference; it simply spreads it over a wider bandwidth. Spreading the noise over a wider bandwidth provides two potential benefits. The most obvious occurs when the victim of the interference is inherently narrowband: spreading the spectrum of the interference beyond the victim bandwidth provides an inherent improvement in signal-to-noise ratio. A second, perhaps less obvious, benefit is that the interference becomes more noise-like in its statistics. Noise-like interference is less objectionable to the human ear than impulsive noise, but it should also be recognized that it is less objectionable to many digital transmission systems too.
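
That conservation-of-power point is easy to check numerically. The sketch below (all parameters arbitrary) dithers the switching frequency of a unit square wave and compares the largest spectral bin against the total power.

```python
import numpy as np

fs = 50e6                         # sample rate, Hz
t = np.arange(1_000_000) / fs     # 20 ms of samples
f0 = 250e3                        # nominal switching frequency, Hz

phase_fixed = 2 * np.pi * f0 * t
f_inst = f0 * (1 + 0.08 * np.sign(np.sin(2 * np.pi * 1e3 * t)))  # +/-8 % dither
phase_spread = 2 * np.pi * np.cumsum(f_inst) / fs

for name, phase in (("fixed", phase_fixed), ("spread", phase_spread)):
    square = np.sign(np.sin(phase))            # unit-amplitude switching wave
    spectrum = np.abs(np.fft.rfft(square)) ** 2
    print(f"{name:6s}: peak bin = {spectrum.max():.3e}, total = {spectrum.sum():.3e}")
# The peak bin drops markedly for the spread case; the total stays the same.
```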

However, from an information-theoretic perspective the nature of the interference doesn’t matter; only the signal-to-noise ratio matters. Many modern communication systems employ wide bandwidths. Furthermore, they employ powerful adaptive modulation and coding schemes that will effectively de-correlate interference sources (making the effect noise-like); these receivers don’t care whether the interference is narrowband or wideband in terms of bit error rate (BER), and they will be affected largely the same by a given amount of interference power (in theory identically the same, though implementation limitations still leave some gap to the theoretical limits).

It is worth noting, however, that while spectrum spreading techniques do not reduce the interference power, they don’t make it any worse either. Thus these techniques may (legitimately, I would argue, as per the above) help with passing a test that specifies the CISPR quasi-peak detector, and should not make the performance on a test specifying the newer CISPR RMS+Average detector any worse.

It should always be an engineering goal to keep interference to a reasonable minimum and I would agree that it is aesthetically most satisfying (and often cheapest and most simple) to achieve this objective by somehow reducing the interference at source (this is a wide definition covering aspects of SMPS design from topology selection to PCB layout and beyond). However, the objective to control noise at the source shouldn’t eliminate alternative methods from consideration in any given application.

There will always be the question of how good is good enough and it is the job of various regulatory bodies to define these requirements and to do so robustly enough such that the compliance tests can’t be “gamed”.

AQ: Friendly system without technicians diagnose

How to make our systems so friendly that they do not need technicians to help diagnose problems? Most of the more obvious answers have been well documented here, but to emphasize the point: it is normally the case that diagnostics and alarms dominate the amount of code constructed for an application. That is, the amount of code required to fully diagnose an application may be as much as, if not more than, the code required to make the application work in the first place!

I have seen HMIs with diagnostic screens showing an animated version of the Cause & Effect code that allows users to see where the trip condition is. I have also seen screens depicting pre-start checks, operator guides, etc., all animated to help the user. Allen-Bradley even has a program viewer that can be embedded in an HMI screen, but you would probably need a technician to understand the code.

From a control system problem perspective, it is inevitable that you will need to use the vendor-based diagnostics to troubleshoot your system. Alarms may indicate that there is a system-related problem, but it is unlikely that you could build a true diagnostics application in your code to cover the full spectrum of problems you may encounter. For example, if the CPU dies or the memory fails, there is nothing left to tell you what the problem is 🙂

From an application/process problem perspective, if you have to resort to a technician going into the code to determine your problem, then your alarm and diagnostics code is not comprehensive enough! Remember your alarming guidelines: an Operator must be able to deal with an abnormal situation within a given time frame. If the Operator has to rely on a Technician to wade through code, then you really do need to perform an alarm review and build a better diagnostics system.

AQ: OPC drivers advantage

A few years back, I had a devil of a time getting some OPC Modbus TCP drivers to work with Modbus RTU-to-TCP converters. The OPC drivers could not handle the 5-digit RTU addressing. You need to make sure the OPC driver you try actually works with your equipment; try before you buy is definite here. Along with some of the complications, like dropped connections due to minor network glitches (a real headache and worth a topic all its own), is the availability of tag pickers and the like. The best thing to happen to I/O addressing is the use of data objects in the PLC and HMI/SCADA. The other advantage of OPC is the ability to get more quality information on your I/O. Again, check before you buy. In my experience, the only protocol worse than Modbus in the quality-info department is DDE, and that is pretty well gone. This still does not help when a Modbus slave reports stale data like it’s fresh. No I/O driver can sort that out; you need a heartbeat.
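
For anyone who has not hit the 5-digit trap: “40001” is not the address on the wire, it is holding-register offset 0. A tiny helper of my own, purely illustrative:

```python
def modbus_reference_to_offset(reference: int):
    """Map a traditional 5-digit Modbus reference to (table, zero-based offset)."""
    tables = {0: "coil", 1: "discrete input",
              3: "input register", 4: "holding register"}
    prefix, offset = divmod(reference, 10000)
    if prefix not in tables or offset < 1:
        raise ValueError(f"not a valid 5-digit Modbus reference: {reference}")
    return tables[prefix], offset - 1

print(modbus_reference_to_offset(40001))   # ('holding register', 0)
print(modbus_reference_to_offset(30017))   # ('input register', 16)
```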

A shout-out to all you equipment manufacturers putting Modbus RTU into equipment because it’s easy: PLEASE BUILD IN A HEARTBEAT us integrators can monitor, so we can be sure the data is alive and well.
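
Where the device does expose a changing register (or a PLC on the far side can be programmed to increment one), a staleness watcher is simple. A minimal sketch, with `read_register` standing in for whatever read call your driver provides:

```python
import time

def watch_heartbeat(read_register, period_s=1.0, timeout_s=5.0):
    """Poll a heartbeat register; return once it stops changing for timeout_s."""
    last_value = read_register()
    last_change = time.monotonic()
    while True:
        time.sleep(period_s)
        value = read_register()
        now = time.monotonic()
        if value != last_value:
            last_value, last_change = value, now
        elif now - last_change > timeout_s:
            return "HEARTBEAT LOST: treat this slave's data as stale"
```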

Also, while you try before you buy, you want your HMI/SCADA to be able to tell the difference between Good Read, No Read, and Bad Read, particularly with an RTU network.