Category: Blog

AQ: 1:1 ratio transformer

A 1:1 ratio transformer is primarily used to isolate the primary from the secondary. In small-scale electronics it prevents noise and interference picked up on the primary side from being transmitted to the secondary. In critical care facilities it can be used as an isolation transformer to isolate the grounding of the supply (primary) from the critical grounding system of the load (secondary). In large-scale applications it is used as a 3-phase delta/delta transformer to isolate the grounded source system (primary) from the ungrounded system of the load (secondary).

In a delta–delta system, equipment grounding is achieved by installing grounding electrodes with a grounding resistance of not more than 25 ohms, as required by the National Electrical Code. From the grounding electrodes, grounding conductors are run with the feeder circuit raceways and branch circuit raceways up to the equipment, where the equipment enclosures and non-current-carrying parts are grounded (bonded). This scheme is predominant in installations where most of the loads are motors, such as industrial plants, or in shipboard installations where the systems are mostly delta-delta (ungrounded). In ships, the hull becomes the grounding electrode. Electrical installations like these have ground-fault monitoring sensors to determine whether there are accidental line-to-ground connections to the grounding system.
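
For illustration only, here is a minimal Python sketch (my own, not from the original answer) of the kind of voltage-imbalance check such a ground-fault monitor might perform on an ungrounded delta system; the nominal line-to-ground value and the tolerance are hypothetical.

```python
# Hypothetical check: on a healthy ungrounded delta the three line-to-ground
# voltages are roughly equal; a single line-to-ground fault collapses one of
# them and pushes the other two toward the line-to-line value.

def ground_fault_suspected(v_ag, v_bg, v_cg, nominal_lg=277.0, tol=0.25):
    """Return True if the line-to-ground voltages are unbalanced enough
    to suggest an accidental line-to-ground connection."""
    voltages = (v_ag, v_bg, v_cg)
    low, high = min(voltages), max(voltages)
    return (low < nominal_lg * (1 - tol)) or (high > nominal_lg * (1 + tol))

# Example: phase A faulted to ground -> its voltage to ground collapses,
# the other two rise toward the line-to-line value.
print(ground_fault_suspected(5.0, 470.0, 475.0))    # True
print(ground_fault_suspected(275.0, 278.0, 276.0))  # False
```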

AQ: Sensorless control

I am curious about the definition of “sensorless control”. When you talk about sensorless control, do you in fact mean the absence of a physical position sensor, such as a magnet plus vane plus Hall-effect device, i.e. not having a unit whose sole objective is position detection?
Is sensorless control instead based on alternative methods of measurement or detection to estimate position, using components that have to exist for the machine to function anyway (such as measuring or detecting voltages or currents in the windings)?

I had long ago wondered about designing a motor and fully measuring its voltage and current profiles and phase firing timings for normal operation (from stationary to full speed, full load), using a position sensor to get the motor working and to determine the best phase firing sequences and associated voltage/current profiles, then programming a microprocessor to replay the entire required profile so as to eliminate the need for any sensing or measurement at all. (I concluded it would come very unstuck under any fault condition, or when restarting while the motor was still turning.) So in my mind, don’t all such machines require some form of measurement (i.e. some form of “sensing”) to work properly, and so could never be truly sensorless?

A completely sensorless control would be completely open-loop, which isn’t reliable with some motors such as PMSMs. Even if you knew the switching instants for one ideal case, too many “random” variables could influence the system (just think of the initial position), so those firing instants could be inappropriate for other situations.

Actually, induction machines, thanks to their inherent stability properties, can be run truly sensorless (i.e. just connected to the grid, or with V/f control). To be honest, even in the simple grid-connection case there is overcurrent detection somewhere in the grid, which requires some sensing.
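
As a rough sketch of what this kind of truly sensorless, open-loop operation looks like, here is a minimal V/f reference generator in Python; the rated values and low-speed boost are illustrative assumptions, not taken from the original post.

```python
import numpy as np

def vf_setpoint(f_cmd, f_rated=50.0, v_rated=400.0, v_boost=10.0):
    """Open-loop V/f: command a stator voltage roughly proportional to the
    commanded frequency (plus a small low-speed boost), with no position or
    speed feedback from the machine."""
    return v_boost + (v_rated - v_boost) * min(f_cmd, f_rated) / f_rated

def three_phase_references(t, f_cmd):
    """Three open-loop phase voltage references at time t (seconds)."""
    v = vf_setpoint(f_cmd)
    theta = 2 * np.pi * f_cmd * t
    return [v * np.cos(theta - k * 2 * np.pi / 3) for k in range(3)]

# Example: references at t = 10 ms for a 30 Hz command.
print(three_phase_references(0.010, 30.0))
```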

But it can also be said that the term sensorless relates to the electric motor itself. In other words, it means there are no sensors attached to the motor (which does not mean there cannot be sensors in the inverter). In our company we use this second meaning, since it indicates that no sensor connections are needed between the motor and the ECU (inverter).

AQ: Differences of Grounding, Bonding and Ground Fault Protection?

Grounding (or earthing) is intentionally connecting something to the ground. This is typically done to help dissipate static charge and lightning energy, since the earth is a poor conductor of electricity unless the voltage and current are high.

Bonding is the intentional interconnection of conductive items in order to tie them to the same potential plane, and this is where folks confuse it with grounding/earthing. The intent of bonding is to ensure that if a power circuit faults to the enclosure or device, there will be a low-impedance path back to the source so that the upstream overcurrent device(s) will operate quickly and clear the fault before a person is seriously injured or killed, or a fire starts.

Ground fault protection is multi-purpose, and I will stay in the low-voltage (<600 volts) arena. One version, seen in most locations with low-voltage (220 or 120 volts to ground) utilization, is typically a 5-7 mA device that checks that the current flowing out on the hot line comes back on the neutral/grounded conductor; this again protects personnel from being electrocuted when in a compromised, lower-resistance condition. Another version is equipment ground fault protection, used for resistive heat tracing or items like irrigation equipment; the trip levels here are around 30 mA and are aimed more at preventing fires. The final version of ground fault protection is found on larger commercial/industrial power systems operating at over 150 volts to ground/neutral (380Y/220 and 480Y/277 are a couple of typical examples) and, at least in the US and Canada, where the incoming main circuit interrupting device is at least 1000 amps (it is not a bad idea at lower ratings, it is just not mandated). Here it is used to ensure that a downstream fault is cleared to avoid fire conditions or a ‘burn down’ event, since there is sufficient residual voltage present that the arc can keep going rather than self-extinguishing.
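
To make the first (personnel-protection) version concrete, here is a minimal Python sketch of the residual-current comparison it performs; the current values are made up, and the 5 mA and 30 mA thresholds simply echo the classes mentioned above.

```python
def gfci_trip(i_hot_ma, i_neutral_ma, threshold_ma=5.0):
    """Personnel-protection logic in one line: if the current leaving on the
    hot conductor does not come back on the neutral, the difference is
    leaking somewhere else (possibly through a person). Roughly 5 mA for
    personnel protection, roughly 30 mA for equipment protection."""
    return abs(i_hot_ma - i_neutral_ma) > threshold_ma

print(gfci_trip(1500.0, 1493.0))                      # 7 mA residual -> trips
print(gfci_trip(1500.0, 1499.0))                      # 1 mA residual -> holds
print(gfci_trip(8000.0, 7965.0, threshold_ma=30.0))   # equipment protection
```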

In the medium- and high-voltage areas, ground fault protection is really just protective relaying that monitors the phase currents and operates on an imbalance above a certain level, which is normally up to the system designer to determine.

AQ: PMBLDC motor in MagNet

You can build it all in MagNet using the circuit position-controlled switch. You will have to use motion analysis in order to use the position-controlled switches. You can also use the back-EMF information to find what the optimal position of the rotor should be with respect to the stator field. The nice thing about motion analysis is that even if you do not have the rotor in the proper position, you can set the reference at start-up.

Another way of determining that position is to find the maximum torque with constant current (with the right phase relationship between phases, of course) and plot torque as a function of rotor position. The peak will correspond to the back-EMF waveform information.
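
As an illustrative sketch of that procedure (this is not MagNet code; the sinusoidal torque function below merely stands in for the finite-element torque result you would obtain at each position), the position sweep and peak-finding step might look like this:

```python
import numpy as np

positions_deg = np.arange(0, 360, 5)  # rotor positions to sweep, degrees

def simulated_torque(pos_deg, t_peak=2.5, offset_deg=40.0):
    # Placeholder for the torque computed at each rotor position with
    # constant stator current; values are purely illustrative.
    return t_peak * np.sin(np.radians(pos_deg - offset_deg))

torques = np.array([simulated_torque(p) for p in positions_deg])
best = positions_deg[np.argmax(torques)]
print(f"Peak torque {torques.max():.2f} N*m at rotor position {best} deg")
```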

If you want to examine the behavior of the motor with an inverter, then another approach works very well. There are two approaches you can use with MagNet: 1) co-simulation, and 2) reduced-order models. The former can be used with Matlab plus Simulink or SimPowerSystems and runs both Matlab and MagNet simultaneously; the module linking the two systems allows two-way communication, hence sharing information. The latter requires that you get the System Model Generator (SMG) from Infolytica. The SMG will create a reduced-order model of your motor which can then be used in Matlab/Simulink or any VHDL-AMS capable system simulator; a block to interpret the data file is required and is available when you get the SMG. Reduced-order models are very interesting since they can simulate the motor very accurately and hook up to complex control circuits.

AQ: SCADA & HMI

SCADA will have a set of KPIs that are used by the PLCs/PACs/RTUs as standards to compare against the readings coming from the intelligent devices they are connected to, such as flowmeters, sensors, pressure gauges, etc.

HMI is a graphical representation of your process system that is provided with both the KPI data and the readings from the various devices through the PLCs/PACs/RTUs. For example, you may be using a PLC with 24 I/O blocks connected to various intelligent devices covering part of your water treatment plant. The HMI software provides the operator with a graphical view of the treatment plant that you customize so that your virtual devices and actual devices are synchronized with the correct I/O blocks in your PLC. So, when an alarm is triggered, instead of the operator receiving a message that the 15th I/O block on PLC 7 failed, you could see that the pressure gauge in a boiler reached its maximum safety level, triggering a shutdown and awaiting operator approval for restart.
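
As a toy illustration of that mapping (hypothetical tag names, addresses, and limits, not taken from any particular HMI package), the translation from a raw I/O point to meaningful alarm text might look like this:

```python
# Hypothetical mapping an HMI maintains between raw PLC I/O addresses and
# plant tags, so an alarm reads "boiler pressure high" instead of
# "PLC 7, I/O block 15 failed".

TAG_MAP = {
    ("PLC7", 15): {"tag": "Boiler 2 pressure", "units": "bar", "hi_alarm": 12.0},
    ("PLC3", 4):  {"tag": "Clarifier flow", "units": "m3/h", "hi_alarm": 450.0},
}

def describe_alarm(plc, io_block, value):
    info = TAG_MAP.get((plc, io_block))
    if info is None:
        return f"{plc} I/O block {io_block}: value {value} (unmapped point)"
    if value > info["hi_alarm"]:
        return f"{info['tag']} high: {value} {info['units']} (limit {info['hi_alarm']})"
    return f"{info['tag']}: {value} {info['units']} (normal)"

print(describe_alarm("PLC7", 15, 12.8))
```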

Here is some more info I got from my colleague, who is the expert on the HMI market; this is a summary from the scope of his last market study, which is about a year old.

HMI software ranges in complexity from a simple PLC/PAC operating interface upward, and as plant systems have evolved, HMI functionality and importance have as well. HMI is an integral component of a Collaborative Production Management (CPM) system, which you can simply define as the integration of enterprise, operations, and automation software into a single system. Collaborative Production Systems (CPS) require a common HMI software solution that can visualize the data and information required at this converged point of operations and production management. HMI software is the bridge between your automation systems and your operations management systems.

An HMI software package typically performs functions such as process visualization and animation, data acquisition and management, process monitoring and alarming, management reporting, and database serving to other enterprise applications. In many cases, an HMI software package can also perform control functions such as basic regulatory control, batch control, supervisory control, and statistical process control.

“Ergonometrics,” where improved ergonomics help improve KPI and metric results, requires deploying the latest HMI software packages. These offer the best resolution to support 3D solutions and visualization based on technologies such as Microsoft Silverlight. Integrating real-time live video into HMI software tools provides another excellent opportunity to maximize operator effectiveness; live video provides a “fourth dimension” for intelligent visualization and control solutions. Finally, the need for open and secure access to data across the entire enterprise drives the creation of a single environment where these applications can coexist and share information. This environment requires the latest HMI software, capable of providing visualization and intelligence solutions for automation, energy management, and production management systems.

AQ: AM & FM radio

For AM & FM radio & some data communications, adding the QP filter makes sense.
Now that broadband, wifi & data communications of all sizes & flavours exist, any peak noise is very likely to cause interruptions & loss of data integrity; meanwhile all systems are being ‘cost reduced’, ensuring that they will be more susceptible to noise.
I can understand the reasons for the tightening of the regulations.
BUT, it links in to the other big topic of the moment – the non-linearity of managers.
William is obviously his own manager – I bet if his customer were to ask him to spend an indefinite amount of time fixing all the root causes to meet the spec perfectly, without any additional cost, it would be a different matter.

Unfortunately, for most of us the reality is that supervisors want projects closed & engineering costs minimized, so we have to be careful in our choice of phrasing.
Any suggestion that one prototype is ‘passing’ can suddenly be translated into ‘job finished’, & even in our case, where the lab manager mostly understands, his boss rarely does & the accountant above him not at all.

It gets worse than that – at the beginning of a project (RFQ) the question is “how long will EMC take to fix?”, with the expectation of a deterministic answer; the usual response of a snort of derision & “how long is a piece of string?” generally gets translated to 2 weeks, & once set in stone it becomes a millstone (sorry, mile-stone).

We already have a number of designs that, while not intentionally using dithering, do use boundary-mode PFC circuits, which automatically force the switching frequency to vary over the mains cycle. These may become problematic under some future variation of the wording of the EMC specs.
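
To show why a boundary-mode (critical conduction) boost PFC dithers on its own, here is a small sketch assuming constant on-time control, where the switching frequency follows f_sw = (1/t_on)(1 - v_in/V_out) over the mains half-cycle; all component values are illustrative, not from any particular design.

```python
import numpy as np

V_OUT = 400.0     # boost output voltage, V
V_PK = 325.0      # peak of 230 Vrms mains, V
T_ON = 4e-6       # constant on-time, s
F_MAINS = 50.0    # mains frequency, Hz

# One half-cycle of rectified mains (exact zero crossings excluded).
t = np.linspace(1e-4, 1 / (2 * F_MAINS) - 1e-4, 200)
v_in = V_PK * np.abs(np.sin(2 * np.pi * F_MAINS * t))

# Constant on-time boundary-mode boost: frequency falls as v_in rises.
f_sw = (1.0 / T_ON) * (1.0 - v_in / V_OUT)

print(f"Switching frequency swings from {f_sw.min()/1e3:.0f} kHz "
      f"(mains peak) to {f_sw.max()/1e3:.0f} kHz (near the zero crossing)")
```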

While I have a great deal of sympathy for the design-it-right-first-time approach, the bottom line for any company is: it meets the requirement (today) – sell it!!

AQ: Electronic industry standards

You know, standards for the electronics industry have been around for decades, so each of the interfaces we have discussed does have a standard. Those standards may be revised but will still be used by all segments of our respective engineering disciplines.

Note, for example, that back in the early 1990s many big companies (HP, Boeing, Honeywell …) formed a standards board and developed software standards (basic recommendations) for software practices for programming of flight systems. It was not the government; it was the industry that took on the effort. The recommendations are still used. So an effort is first needed by a meeting of the minds in the industry.

Now we have plenty of standards on the books for the industry (RS-422, RS-232, 802.1 …) and the list goes on and on. The point is that most companies are conforming to standards that may have been the preferred method when their product was developed.

In the discussion I have not seen what the top preferred interfaces are. I know that in many of the developments I have been involved in, we ended up using protocol converters (RS-232 to 802.3, 422 to 485 …); that’s the way it has been in control systems, monitoring systems, launch systems and factory automation. And in a few projects no technology existed for the interface layer, so we had to build it from scratch. Note the evolution of ARPANET to Ethernet to the many variations that are available today.

So for the short haul, if I wanted to be more competitive I would use multiple interfaces on my hardware, say USB, wireless, and RS-422, at least for new developments. With the advancements in PSoCs and other forms of programmable logic, interface solutions are available to the engineer.

Start the interface standards with the system engineers and a little research on the characteristics of the many automation components; select the ones that comply with the goals, and the ones that don’t will eventually become obsolete. If anything, work on some system standards. If the customer is defining the system, loan him a systems engineer and make the case for the devices your system or box can support; if you find your product falls short, build a new version. Team with other automation companies on projects and learn from each other. It’s easy to find reasons why you can’t succeed because of product differences, so break the issues down into manageable objectives and solve one issue at a time. As they say, divide and conquer.

AQ: Spread spectrum of power supply

Having led design efforts for very sensitive instrumentation with high-frequency A/D converters with greater than 20 bits of resolution, my viewpoint is mainly concerned with the noise in the regulated supply output. In these designs the fairly typical 50 mV peak-to-peak noise is totally unacceptable, and some customers cannot stand even 1 uVrms of noise at certain frequencies. While spread spectrum may help the power supply designer, it may also raise havoc with the user of the regulated output. The amplitude of the switching spikes (input or output), as some have said, is not reduced by dithering the switching frequency. Sometimes locking the switching so that, in time, it does not interfere with the circuits using the output can help. Some may also think this is cheating, but as was said, it is very difficult to get rid of most 10 MHz noise. This extreme difficulty applies to many of the harmonics above 100 kHz. (Beginners who think that switching 20 to 100 times above the LC filter corner will reduce the switching noise by 40 to 200 are sadly wrong: once you pass 100 kHz, many capacitors and inductors have parasitics, making it very hard to get high attenuation in one LC stage, and often there is no room for more. More inductors often introduce more losses as well.) We should be reducing all the noise we can and then use other techniques as necessary. With spread spectrum becoming more popular, we may soon see regulation of its total noise output as well.
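
A quick numeric sketch of that parasitics point, using hypothetical component values of my own: an ideal second-order LC filter keeps rolling off at 40 dB/decade, but the capacitor’s ESL and ESR flatten the attenuation well before 10 MHz.

```python
import numpy as np

L = 10e-6     # filter inductor, H
C = 10e-6     # filter capacitor, F
ESR = 5e-3    # capacitor ESR, ohm (illustrative)
ESL = 5e-9    # capacitor ESL, H (illustrative)

def attenuation_db(f, esl=0.0, esr=0.0):
    """Unloaded LC divider response, optionally with capacitor parasitics."""
    w = 2 * np.pi * f
    z_c = esr + 1j * w * esl + 1.0 / (1j * w * C)   # real capacitor branch
    z_l = 1j * w * L                                 # ideal inductor
    return 20 * np.log10(abs(z_c / (z_l + z_c)))

for f in (100e3, 1e6, 10e6):
    print(f"{f/1e6:>5.1f} MHz: ideal {attenuation_db(f):7.1f} dB, "
          f"with parasitics {attenuation_db(f, ESL, ESR):7.1f} dB")
```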

One form of troublesome noise is common-mode noise coming out of the power inputs to the power supply. If this is present on the power input, it is very likely also present on the “regulated” output power if it is floating. Here, careful design of the switching power magnetics and care in the layout can help minimize this noise enough that filters may be able to keep the residual within acceptable limits. Ray discusses some of this in his class, but many non-linear managers frequently do not think it is reasonable or necessary for the power supply design engineer to be involved in the layout or location of copper traces. Why not? The companies that sell the multi-$100K+ software told their bosses the software automatically optimizes and routes the traces.

Spread spectrum is a tool that may be useful to some but not to all. I hope the sales pitch for those control chips does not lull unsuspecting new designers into complacency about their filter requirements.

AQ: Home automation concept

The concept of home automation on a global scale is a good one. How to implement such a technology on a global scale is an interesting problem, or I should say a set of issues to be resolved. Before global acceptance can be accomplished, home automation products may need a strategy that starts with a look at companies that have succeeded in getting global acceptance of their products.

If we look at which companies have the most products distributed around the world, we see that Intel is one of them. What’s interesting is that this company has used automation in its fabs for decades. This automation has allowed it to produce products faster and cheaper than the rest of the industry, and the company continues to invest in automation and in the ability to evolve with technology and management. We have many companies that compete on the world stage, but I don’t think many of them distribute as much product. So to make home automation accepted and to accomplish global acceptance, the industry and the factories have to evolve to compete. That mission can be accomplished by adopting a strategy that updates the automation in their factories: stop using products that were used and developed in the 1970s (another way of saying COTS) and progress to current and new systems. A ten-year-old factory may be considered obsolete if the equipment inside is as old as the factory.

Now for cost: when I think of a PLC or commercial controller, I see a COTS product that may be using obsolete parts that are no longer in production, or old boards. So I see higher manufacturing cost and a reduction in reliability. Many procurement people evaluate risk in a way that rates older boards lower in risk for the short term, which is not a good evaluation for the long term. The cost is a function of how much product can be produced at the lowest cost and how efficient and competitive the company producing the product is. So time is money. The responsibility for cost lies with the company and its ability to produce a competitive product, not with the government.

Now on to control systems and safety: if the automation system is used in the house, safety has to be a major consideration. I know that at Intel fabs, if you violate any safety rule you won’t be working at that company long. To address safety, the product must conform to the appropriate standards. Safety should be a selling point for home automation. Automation engineers should understand and remember that safety is one of the main considerations for an engineer: if someone gets hurt or killed because of a safety issue, the first person looked at is the engineer.

Now, 30% energy saving in my book is not enough; 35 to 40 percent should be the goal. Solar cells have improved but are most efficient in the southwest US. Stirling engines are 1960s designs and use rare gases such as helium, which may not be a renewable resource. Wind generators need space and are electromechanical, so reliability and maintenance need improving.

Now on to the interface standards: most modern factories that produce processors use the Generic Equipment Model (GEM) standard, and it works well. As for what standard interface to use and when: one box produced by one company may use RS-422 where another company may use RS-485, so the system engineer should resolve these issues before detailed design starts. Check with the IEEE, or you may be able to find the spec at EverySpec.com; this is a good place to look for some of the specs needed.

So I conclude that many issues exist, and when they are broken down, home automation is viable; it needs a concerted effort and commitment from at least the companies and management that produce automation products, plus a different model for manufacturing and growing the home systems.
Home automation with a focus on energy savings as a goal is a good thing. We have a lot of work to make it happen.

AQ: Signal processing and communications theory

Coming from a signal processing and communications theory background, but with some experience in power design, I can’t resist the urge to chime in with a few remarks.

There are many engineering methods to deal with sources of interference, including noise from switching converters, and spread spectrum techniques are simply one more tool that may be applied to achieve a desired level of performance.

Spread spectrum techniques will indeed allow a quasi-peak EMC test to be passed when it might otherwise be failed. Is this an appropriate application for this technique?

The quasi-peak detector was developed with the intention of providing a benchmark for determining the psycho-acoustic “annoyance” of an interference on analog communications systems (more specifically, predominantly narrowband AM-type communication systems). Spread spectrum techniques resulting in a reduced QP detector reading will almost undoubtedly reduce the annoyance the interference would otherwise have presented to the listener. Thus the intent was to reduce the degree of objectionable interference, and the application of spread spectrum meets that goal. This doesn’t seem at all like “cheating” to me; the proper intent of the regulatory limit is still being met.

On the other hand, as earlier posters have pointed out, the application of spectrum spreading does nothing to reduce the total power of the interference but simply spreads it over a wider bandwidth. Spreading the noise over a wider bandwidth provides two potential benefits. The most obvious benefit occurs if the victim of the interference is inherently narrowband: spreading the spectrum of the interference beyond the victim bandwidth provides an inherent improvement in signal-to-noise ratio. A second, perhaps less obvious, benefit is that the interference becomes more noise-like in its statistics. Noise-like interference is less objectionable to the human ear than impulsive noise, but it should also be recognized that it is less objectionable to many digital transmission systems too.
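
A quick back-of-the-envelope illustration of the narrowband-victim case (my own figures, assuming the spread interference is roughly uniform in frequency): the in-band improvement is roughly the ratio of spread bandwidth to victim bandwidth.

```python
import math

P_TOTAL_DBM = -40.0    # total interference power, unchanged by spreading
VICTIM_BW_HZ = 9e3     # e.g. a narrowband AM receiver channel

def in_band_power_dbm(spread_bw_hz):
    """Interference power falling inside the victim bandwidth, assuming the
    spread energy is roughly uniform over spread_bw_hz."""
    if spread_bw_hz <= VICTIM_BW_HZ:
        return P_TOTAL_DBM                      # the victim sees all of it
    return P_TOTAL_DBM - 10 * math.log10(spread_bw_hz / VICTIM_BW_HZ)

print(in_band_power_dbm(9e3))     # unspread tone in-channel: -40 dBm
print(in_band_power_dbm(200e3))   # spread over 200 kHz: about -53.5 dBm
```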

However, from an information-theoretic perspective the nature of the interference doesn’t matter; only the signal-to-noise ratio matters. Many modern communication systems employ wide bandwidths. Furthermore, they employ powerful adaptive modulation and coding schemes that will effectively de-correlate interference sources (making their effect noise-like); these receivers don’t care whether the interference is narrowband or wideband in terms of bit error rate (BER), and they will be affected largely the same by a given amount of interference power (in theory identically the same, though implementation limitations still leave some gap to the theoretical limits).

It is worth noting, however, that while spectrum spreading techniques do not reduce the interference power, they don’t make it any worse either. Thus these techniques may (legitimately, I would argue, as per above) help with passing a test which specifies the CISPR quasi-peak detector, and should not make the performance on a test specifying the newer CISPR RMS+Average detector any worse.

It should always be an engineering goal to keep interference to a reasonable minimum, and I would agree that it is aesthetically most satisfying (and often cheapest and simplest) to achieve this objective by reducing the interference at the source (a wide definition covering aspects of SMPS design from topology selection to PCB layout and beyond). However, the objective of controlling noise at the source shouldn’t eliminate alternative methods from consideration in any given application.

There will always be the question of how good is good enough and it is the job of various regulatory bodies to define these requirements and to do so robustly enough such that the compliance tests can’t be “gamed”.