Author: ABBdriveX

AQ: Signal processing and communications theory

Coming from a signal processing and communications theory background, but with some experience in power design, I can’t resist the urge to chime in with a few remarks.

There are many engineering methods to deal with sources of interference, including noise from switching converters, and spread spectrum techniques are simply one more tool that may be applied to achieve a desired level of performance.

Spread spectrum techniques will indeed allow a quasi-peak EMC test to be passed when it might otherwise be failed. Is this an appropriate application for this technique?

The quasi-peak detector was developed to provide a benchmark for the psycho-acoustic "annoyance" of interference to analog communications systems (more specifically, predominantly narrowband AM-type communication systems). Spread spectrum techniques that reduce the QP detector reading will almost certainly reduce the annoyance the interference would otherwise have presented to the listener. The intent was to limit the degree of objectionable interference, and the application of spread spectrum meets that goal. This doesn't seem at all like "cheating" to me; the proper intent of the regulatory limit is still being met.

On the other hand, as earlier posters have pointed out, spectrum spreading does nothing to reduce the total power of the interference; it simply spreads it over a wider bandwidth. Spreading the noise over a wider bandwidth provides two potential benefits. The most obvious occurs when the victim of the interference is inherently narrowband: spreading the interference beyond the victim bandwidth provides an inherent improvement in signal-to-noise ratio. A second, perhaps less obvious, benefit is that the interference becomes more noise-like in its statistics. Noise-like interference is less objectionable to the human ear than impulsive noise, and it should also be recognized that it is less objectionable to many digital transmission systems too.
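
A minimal numerical sketch of that point (the switching frequency, dither profile and sample rate are hypothetical values chosen only for illustration): a fixed-frequency square wave and a frequency-dithered one carry essentially the same total power, but the dithered spectrum has a much lower peak.

# Sketch: dithering the switching frequency spreads a fixed amount of
# interference power over a wider bandwidth, lowering the spectral peak
# without lowering total power. 100 kHz nominal frequency, +/-10% dither.
import numpy as np

fs = 20e6                                   # sample rate, Hz (assumed)
t = np.arange(0, 0.02, 1/fs)
f0 = 100e3                                  # nominal switching frequency, Hz

fixed = np.sign(np.sin(2*np.pi*f0*t))       # fixed-frequency "switcher"

dither = f0 * (1 + 0.1*np.sin(2*np.pi*300*t))   # instantaneous frequency, slow 300 Hz dither
phase = 2*np.pi*np.cumsum(dither)/fs
spread = np.sign(np.sin(phase))             # dithered "switcher"

for name, x in [("fixed ", fixed), ("spread", spread)]:
    psd = np.abs(np.fft.rfft(x*np.hanning(len(x))))**2
    print(name, "total power %.3e" % psd.sum(), "  peak bin %.3e" % psd.max())
# Total power is essentially the same for both; the peak bin of the dithered
# spectrum is markedly lower because the energy is smeared across many bins.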

However, from an information-theoretic perspective the nature of the interference doesn't matter; only the signal-to-noise ratio matters. Many modern communication systems employ wide bandwidths. Furthermore, they employ powerful adaptive modulation and coding schemes that effectively de-correlate interference sources (making their effect noise-like); these receivers don't care whether the interference is narrowband or wideband in terms of bit error rate (BER), and they will be affected largely the same by a given amount of interference power (in theory identically, but implementation limitations still leave some gap to the theoretical limits).

It is worth noting, however, that while spectrum spreading techniques do not reduce the interference power, they don't make it any worse either. Thus these techniques may (legitimately, I would argue, as per the above) help with passing a test that specifies the CISPR quasi-peak detector, and they should not make the performance on a test specifying the newer CISPR RMS-average detector any worse.

It should always be an engineering goal to keep interference to a reasonable minimum, and I would agree that it is aesthetically most satisfying (and often cheapest and simplest) to achieve this objective by reducing the interference at the source (a wide definition covering aspects of SMPS design from topology selection to PCB layout and beyond). However, the objective of controlling noise at the source shouldn't eliminate alternative methods from consideration in any given application.

There will always be the question of how good is good enough, and it is the job of the various regulatory bodies to define these requirements, and to do so robustly enough that the compliance tests can't be "gamed".

AQ: Spread spectrum of power supply

Having led design efforts for very sensitive instrumentation with high-frequency A/D converters of greater than 20 bits of resolution, my viewpoint is mainly concerned with the noise in the regulated supply output. In these designs the fairly typical 50 mV peak-to-peak noise is totally unacceptable, and some customers cannot stand even 1 uVrms of noise at certain frequencies. While spread spectrum may help the power supply designer, it may also raise havoc with the user of the regulated output. As some have said, the amplitude of the switching spikes (input or output) is not reduced by dithering the switching frequency. Sometimes locking the switching instant so that, in time, it does not interfere with the circuits using the output can help. Some may also think this is cheating, but as was said, it is very difficult to get rid of most 10 MHz noise, and this extreme difficulty applies to many of the harmonics above 100 kHz. (Beginners who think that being 20 to 100 times above the LC filter corner will reduce the switching noise by a factor of 40 to 200 are sadly wrong: once you pass 100 kHz, many capacitors and inductors have parasitics that make it very hard to get high attenuation in one LC stage, and often there is no room for more. More inductors often introduce more losses as well.) We should be reducing all the noise we can and then use other techniques as necessary. With spread spectrum becoming more popular, we may soon see regulation of its total noise output as well.
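
To illustrate the parenthetical point about parasitics, here is a rough sketch with purely hypothetical component values: an ideal second-order LC keeps rolling off at 40 dB/decade, but once the capacitor's ESR/ESL and the inductor's winding capacitance are included, the attenuation of a single stage flattens out above a few hundred kHz.

# Sketch: single LC filter stage, ideal vs. with parasitics (values assumed).
import numpy as np

f = np.logspace(4, 7.5, 8)            # 10 kHz .. ~30 MHz
w = 2*np.pi*f
L, Cpar = 10e-6, 20e-12               # 10 uH inductor, 20 pF winding capacitance
C, esr, esl = 10e-6, 20e-3, 3e-9      # 10 uF capacitor, 20 mOhm ESR, 3 nH ESL

Zl = 1/(1/(1j*w*L) + 1j*w*Cpar)       # inductor with parallel winding capacitance
Zc = 1/(1j*w*C) + esr + 1j*w*esl      # capacitor with series ESR and ESL
H = Zc/(Zl + Zc)                      # unloaded voltage divider, with parasitics
ideal = (1/(1j*w*C)) / (1j*w*L + 1/(1j*w*C))   # ideal LC divider

for fi, h, hi in zip(f, H, ideal):
    print("%10.0f Hz   real %7.1f dB   ideal %7.1f dB"
          % (fi, 20*np.log10(abs(h)), 20*np.log10(abs(hi))))
# The ideal curve keeps falling at 40 dB/decade; the "real" curve levels off
# once the capacitor impedance is dominated by ESR/ESL and the inductor by
# its winding capacitance.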

One form of troublesome noise is common-mode noise coming out of the power inputs to the power supply. If this is present on the power input to the power supply, it is very likely also present in the "regulated" output power if it is floating. Here, careful design of the switching power magnetics and care in the layout can help minimize this noise enough that filters may be able to keep the residual within acceptable limits. Ray discusses some of this in his class, but many non-linear managers frequently do not think it is reasonable or necessary for the power supply design engineer to be involved in layout or location of copper traces. Why not? The companies that sell the multi-$100K+ software told their bosses the software automatically optimizes and routes the traces.

Spread spectrum is a tool that may be useful to some but not to all. I hope the sales pitch for those control chips does not lull unsuspecting new designers into complacency about their filter requirements.

AQ: AM & FM radio

For AM & FM radio & some data communications, adding the QP filter makes sense.
Now that broadband, wifi & data communications of all sizes & flavours exist, any peak noise is very likely to cause interruptions & loss of data integrity, and all systems are being 'cost reduced', ensuring that they will be more susceptible to noise.
I can understand the reasons for the tightening of the regulations.
BUT, it links in to the other big topic of the moment – the non-linearity of managers.
William is obviously his own manager – I bet if his customer were to ask him to spend an indefinite amount of time fixing all the root causes to meet the spec perfectly, without any additional cost, it would be a different matter.

Unfortunately, for most of us the reality is supervisors wanting projects closed & engineering costs minimized, so we have to be careful in our choice of phrasing.
Any suggestion that one prototype is 'passing' can suddenly be translated into 'job finished', & even in our case, where the lab manager mostly understands, his boss rarely does & the accountant above him not at all.

It gets worse than that – at the beginning of a project (RFQ) the question is "how long will EMC take to fix?", with the expectation of a deterministic answer; the usual response of a snort of derision & "how long is a piece of string?" generally translates to 2 weeks, & once set in stone it becomes a millstone (sorry, mile-stone).

We already have a number of designs that, while not intentionally using dithering, do use boundary-mode PFC circuits which automatically force the switching frequency to vary over the mains cycle. These may become problematic under some future revision of the wording of the EMC specs.

While I have a great deal of sympathy for the design-it-right-first-time approach, the bottom line for any company is: it meets the requirement (today) – sell it!!

AQ: PMBLDC motor in MagNet

You can build it all in MagNet using the circuit position-controlled switch. You will have to use motion analysis in order to use the position-controlled switches. You can also use the back-EMF information to find what the optimal position of the rotor should be with respect to the stator field. The nice thing about motion analysis is that even if you do not have the rotor in the proper position, you can set the reference at start-up.

Another way of determining that position is to find the maximum torque with constant current (with the right phase relationship between phases of course) and plot torque as a function of rotor position. The peak will correspond to the back EMF waveform information.
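
A generic post-processing sketch of that idea (not tied to MagNet; the torque values are placeholders standing in for FEA results): sweep the rotor position with constant phase current, record the torque at each step, and take the angle of maximum torque.

# Sketch: locate the rotor angle of peak torque from a constant-current sweep.
import numpy as np

angles_deg = np.arange(0, 61, 5)                   # swept rotor positions, deg
torque_Nm = 3.0*np.sin(np.radians(6*angles_deg))   # placeholder for FEA torque results
i_peak = int(np.argmax(torque_Nm))
print("peak torque %.2f Nm at rotor position %d deg"
      % (torque_Nm[i_peak], angles_deg[i_peak]))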

If you want to examine the behavior of the motor with an inverter, then another approach works very well. There are two approaches you can use with MagNet: 1) co-simulation, and 2) reduced-order models. The former can be used with MATLAB with Simulink or SimPowerSystems and runs both MATLAB and MagNet simultaneously. The module linking the two systems allows two-way communication between them, hence sharing information. The latter requires that you get the System Model Generator (SMG) from Infolytica. The SMG will create a reduced-order model of your motor which can then be used in MATLAB/Simulink or any VHDL-AMS capable system simulator. A block to interpret the data file is required and is available when you get the SMG. Reduced-order models are very interesting since they can very accurately simulate the motor and hook up to complex control circuits.

AQ: Sensorless control

I am curious about the definition of "sensorless control". When you talk about sensorless control, do you in fact mean the lack of a physical position sensor, such as a magnet plus vane plus Hall effect, i.e. not having a unit whose sole objective is position detection?
Is the sensorless control instead based on alternative methods of measurement or detection to infer position, using components that have to exist for the machine to function anyway (such as measuring or detecting voltages or currents in the windings)?

I had long ago wondered about designing a motor, fully measuring its voltage and current profiles and phase firing timings for normal operation (from stationary to full speed, full load) using a position sensor to get the motor working and to determine the best phase firing sequences and associated voltage/current profiles, and then programming a microprocessor to replicate the entire required profile so as to eliminate the need for any sensing or measurement at all (but I concluded it would come very unstuck under any fault condition or when restarting while the motor was still turning). So in my mind, don't all such machines require some form of measurement (i.e. some form of "sensing") to work properly, and so could never be truly sensorless?

A completely sensor-less control would be completely open-loop, which isn’t reliable with some motors like PMSMs. Even if you knew the switching instants for one ideal case, too many “random” variables could influence the system (just think of the initial position), so that those firing instants could be inappropriate for other situations.

Actually, induction machines, thanks to their inherent stability properties, can be run truly sensorless (i.e. just connected to the grid or in V/f). To be honest, even in the simple grid-connection case there is overcurrent detection somewhere in the grid, which requires some sensing.

It can also be said that the term sensorless relates to the electric motor itself. In other words, it means there are no sensors "attached" to the motor (which does not mean sensors cannot be in the inverter, in such a case). In our company we use this second meaning, since it indicates that no sensor connections are needed between the motor and the ECU (inverter).

AQ: Motor design

When I was doing my PhD in motor design of reluctance machines with flux assistance (switched reluctance machines and flux-switching machines with magnets and/or permanently energised coils), my supervisor was doing research in the field of sensorless control (it wasn't the area of my research, but it got me thinking about it all). At the time I thought (only in my head, as a PhD student daydream) that I would initially force a phase (or phases) to deliberately set the rotor into a known position, then start a normal phase firing sequence to start and operate the motor at a normal load without any form of position detection. All this assumed I had first run the motor from stationary to full speed at the expected load using a position sensor, so that I could link phase firing, rotor position and timings together to create a "map" which I could then use to program a firing sequence with no position detection at all – but only if I could force the rotor to "park" itself in the same position every time before starting the machine properly, the "map" carrying the information to assume that the motor changes speed correctly as the firing sequence accelerates it to full speed. But any problem such as an unusual load condition or a fault condition (e.g. a short circuit or open circuit in a phase winding) would render such an attempt at control without position detection useless. And the induction machine, even when run "sensorless" on the grid, is still being measured.

AQ: Active power losses in electrical motor

The equivalent active power losses during no-load testing of an electrical motor include the following components:
1. active power losses in the copper of the stator winding, which are proportional to the square of the no-load current: Pcus = 3*Rs*I0s^2,

2. active power losses in the ferromagnetic core, which depend on the frequency and on the level of magnetic flux density (which in turn depends on the voltage):
a) active power losses caused by eddy currents: Pec = (kec*d^2*f^2*B^2)/ρ
b) active power losses caused by hysteresis: Ph = kh*f*B^x

3. mechanical power losses, which are proportional to the square of the angular speed: Pmech = Kmech*ωmech^2.

Comment:
First, as you can see, the active power losses in the ferromagnetic core of an electrical motor depend on the voltage and the frequency, so by increasing the voltage you will get higher active power losses in the ferromagnetic core.

Second, you can't compare two electrical motors with different rated voltage and different rated power, because the active power losses in the ferromagnetic core, as I have already said above, depend on the voltage and frequency, while the active power losses in the copper of the stator windings depend on the square of the no-load current, which is different for motors of different rated power.

Third, when you want to compare the no-load active power losses of two electrical motors with the same rated voltage and rated power, you need to check the design of both motors, because it is possible that one of them has a different kind of winding – perhaps one of them was damaged in the past and its windings had to be replaced – which could mean a different electrical design and, as a consequence, a different no-load current.
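
A minimal numeric sketch of how these components might be tabulated using the relations listed above. All machine data and coefficients below are hypothetical, chosen only to give plausible magnitudes:

# Sketch: separating no-load input power into its loss components.
Rs     = 0.8        # stator resistance per phase, ohm (assumed)
I0     = 4.2        # no-load line current, A (assumed)
f      = 50.0       # supply frequency, Hz
B      = 1.5        # peak core flux density, T (assumed)
d      = 0.5e-3     # lamination thickness, m (assumed)
rho    = 4.5e-7     # lamination resistivity, ohm*m (assumed)
k_h, x = 1.2, 1.8   # hysteresis coefficient and Steinmetz exponent (assumed)
k_ec   = 0.03       # eddy-current coefficient, lumps core volume/geometry (assumed)
k_mech = 1e-3       # friction and windage coefficient (assumed)
w_mech = 2*3.14159*f        # mechanical speed, rad/s (2-pole, slip neglected at no load)

P_cu_s = 3 * Rs * I0**2                    # stator copper loss ~ I0^2
P_ec   = k_ec * d**2 * f**2 * B**2 / rho   # eddy loss ~ d^2 * f^2 * B^2 / rho
P_h    = k_h * f * B**x                    # hysteresis loss ~ f * B^x
P_mech = k_mech * w_mech**2                # friction and windage ~ speed^2

print("stator copper     %6.1f W" % P_cu_s)
print("eddy current      %6.1f W" % P_ec)
print("hysteresis        %6.1f W" % P_h)
print("friction/windage  %6.1f W" % P_mech)
print("total no-load     %6.1f W" % (P_cu_s + P_ec + P_h + P_mech))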

AQ: Induction machines testing

Case: We tested 3 different machines under no-load conditions.
The 50 HP and 3 HP machines are the ones which behave abnormally when we apply 10% overvoltage. The third machine (7.5 HP) reacts normally under the same condition.
What we mean by abnormal behavior is that the input power of the machine increases dramatically under only 10% overvoltage, which is not the case with most induction machines. This can be seen in the numbers given below.

50 HP, 575V
Under 10% overvoltage:
Friction & Windage Losses increase 0.2%
Core loss increases 102%
Stator Copper Loss increases 107%

3 HP, 208V
Under 10% overvoltage:
Friction & Windage Losses increase 8%
Core loss increases 34%
Stator Copper Loss increases 63%

7.5 HP, 460V
Under 10% overvoltage:
Friction & Windage Losses decrease 1%
Core loss increases 22%
Stator Copper Loss increases 31%

So far we have not been able to diagnose the exact reason why those two machines behave in this way.
Answer: A few other things I have not seen (yet) include the following:
1) Are the measurements of voltage and current being made by “true RMS” devices or not?
2) Actual measurements for both current and voltage should be taken simultaneously (with a “true RMS” device) for all phases.
3) Measurements of voltage and current should be taken at the motor terminals, not at the drive output.
4) Measurement of output waveform frequency (for each phase), and actual rotational speed of the motor shaft.

These should all be done at each point on the curve.

The reason for looking at the phase relationships of voltage and current is to ensure the incoming power is balanced. Even a small voltage imbalance (say, 3 percent) may result in a significant current imbalance (often 10 percent or more). This unbalanced supply will lead to increased (or at least unexpected) losses, even at relatively light loads. Also – the unbalance is more obvious at lightly loaded conditions.

As noted above, friction and windage losses are speed dependent: the approximate relationship is with the square of speed.

Things to note about how the machine should perform under normal circumstances:
1. The flux densities in the magnetic circuit are going to increase proportionally with the voltage. This means +10% volts means +10% flux. However, the magnetizing current requirement varies more like the square of the voltage (+10% volt >> +18-20% mag amps).
2. Stator core loss is proportional to the square of the voltage (+10% V >> +20-25% kW).
3. Stator copper loss is proportional to the square of the current (+10% V >> +40-50% kW).
4. Rotor copper loss is independent of voltage change (+10% V >> +0 kW).
5. Assuming speed remains constant, friction and windage are unaffected (+10% V >> +0 kW). Note that with a change of 10% volts, it is highly likely that the speed WILL actually change!
6. Stator eddy loss is proportional to square of voltage (+10% V >> +20-25% kW). Note that stator eddy loss is often included as part of the “stray” calculation under IEEE 112. The other portions of the “stray” value are relatively independent of voltage.
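
As a quick cross-check (a sketch only; the "expected" figures simply apply the square-law relations from the list above, and the measured percentages are copied from the test data quoted earlier in this thread):

# Sketch: rule-of-thumb vs. measured changes for a +10% voltage step.
dv = 0.10                                      # +10% overvoltage
expected_core   = ((1+dv)**2 - 1) * 100        # ~21% if core loss ~ V^2
expected_copper = (((1+dv)**2)**2 - 1) * 100   # ~46% if I0 ~ V^2 and loss ~ I0^2

measured = {                                   # % increases from the test data above
    "50 HP, 575 V":  (102, 107),
    "3 HP, 208 V":   (34, 63),
    "7.5 HP, 460 V": (22, 31),
}

print("expected (unsaturated): core +%.0f%%, stator copper +%.0f%%"
      % (expected_core, expected_copper))
for name, (core, cu) in measured.items():
    print("%-14s measured: core +%d%%, stator copper +%d%%" % (name, core, cu))
# The 7.5 HP machine is closest to the square-law expectation; the 50 HP
# machine's core and copper losses grow far faster, consistent with heavy
# saturation or core/winding damage as discussed below.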

Looking at your test results, it would appear that the 50 HP machine:
a) is very highly saturated,
b) has damaged/shorted laminations,
c) has a different grade of electrical steel (compared to the other ratings),
d) has damaged stator windings (possibly from operation on the drive, particularly if it has a very high dv/dt and/or high common-mode voltage characteristic), or
e) has some combination of any/all of the above.

One last question – are all the machines rated for the same operating speed (measured in RPM)?

AQ: How generator designers determine the power factor?

The generator designers will have to determine the winding cross-sectional area and the specific current (A/mm2) to satisfy the required current, and they will have to determine the required total flux and the flux variation per unit of time per winding to satisfy the voltage requirement. Then they will have to determine how the primary flux source will be generated (excitation), and how the required mechanical power can be transmitted into the electro-mechanical system, at the appropriate speed for the required frequency.
In all the above, we can have parallel paths of current, as well as of flux, in all sorts of combinations.
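
A minimal sketch of those two sizing steps with purely hypothetical numbers (the rating, voltage, current density, flux per pole and winding factor are all assumptions), using the standard EMF relation E = 4.44*f*kw*N*Phi for sinusoidal flux:

# Sketch: conductor cross-section from current density, turns per phase from EMF.
S_MVA, V_LL, f = 10.0, 11e3, 50.0     # rating, line voltage, frequency (assumed)
J   = 4.0                             # A/mm^2, assumed current density
Phi = 0.12                            # Wb, assumed flux per pole
kw  = 0.955                           # assumed winding factor

I_line = S_MVA*1e6 / (3**0.5 * V_LL)      # stator current per phase
A_cond = I_line / J                       # required copper cross-section
E_ph   = V_LL / 3**0.5                    # target phase EMF (approximately)
N      = E_ph / (4.44 * f * kw * Phi)     # series turns per phase

print("line current   %.0f A" % I_line)
print("conductor area %.0f mm^2" % A_cond)
print("turns/phase    %.0f" % N)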

1) All ordinary AC power depends on electrical induction, which basically is flux variations through coils of wire. (In the stator windings).
2) Generator rotor current (also called excitation) is not directly related to Power Factor, but to the no-load voltage generated.
3) The reason for operating near unity Power Factor is rather that it gives the most power per ton of materials used in the generating system, and at the same time minimises the transmission losses.
4) Most Generating companies do charge larger users for MVAr, and for the private user, it is included in the tariff, based on some assumed average PF less than unity.
5) In some situations, synchronous generators have been used simply as VAr compensators, at zero power factor. They are much simpler to control than static VAr compensators, can be varied continuously, and do not generate harmonics. Unfortunately they have higher maintenance costs.
6) When the torque from the prime mover exceeds a certain limit, it can cause pole slip. The limit at which that happens depends on the available flux (from the excitation current) and the stator current (from/to the connected load).

AQ: High voltage power delivery

You already know from your engineering that higher voltages result in lower operational losses for the same amount of power delivered. The bulk capacity of 3000 MW obviously has a great influence on the investment costs; it determines the voltage level and the required number of parallel circuits. Higher DC voltage levels have become more attractive for bulk power projects (such as this one), especially when the transmission line is more than 1000 km long. On the economics, investment for 800 kV DC systems has been much lower since the 90s. Aside from the reduction of overall project costs, HVDC transmission lines at higher voltage levels require less right-of-way. Since you will also require fewer towers, as discussed below, you will also reduce the duration of the project (at least for the line).

Why DC and not AC? From a technical point of view, there are no special obstacles to higher DC voltages. Maintaining stable transmission can be difficult over long AC transmission lines. The thermal loading capability is usually not decisive for long AC lines because of the limitations imposed by reactive power consumption. The power transmission capacity of HVDC lines is mainly limited by the maximum allowable conductor temperature in normal operation. However, the converter station cost is high and offsets the gain from the reduced cost of the transmission line. Thus a short line is cheaper with AC transmission, while a longer line is cheaper with DC.
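
A toy illustration of that break-even argument (all cost figures are made-up relative numbers, not project data): the HVDC terminals cost more, the HVDC line costs less per km, and the crossover distance is where the totals meet.

# Sketch: AC vs DC total cost as a function of line length (relative units).
ac_terminal, ac_per_km = 100.0, 1.00
dc_terminal, dc_per_km = 280.0, 0.70

break_even = (dc_terminal - ac_terminal) / (ac_per_km - dc_per_km)
print("break-even distance ~ %.0f km" % break_even)   # ~600 km with these numbers

for km in (200, 600, 1500, 3000):
    ac = ac_terminal + ac_per_km*km
    dc = dc_terminal + dc_per_km*km
    print("%5d km: AC %.0f, DC %.0f -> %s cheaper"
          % (km, ac, dc, "DC" if dc < ac else "AC"))
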
One criterion to be considered is the insulation performance, which is determined by the overvoltage levels, the air clearances, the environmental conditions and the selection of insulators. The requirements on the insulation performance mainly affect the investment costs for the towers.

For the line insulation, air clearance requirements are more critical with EHV AC due to the nonlinear behavior of the switching overvoltage withstand. The air clearance requirement is a very important factor in the mechanical design of the tower. The mechanical load on the tower is considerably lower with HVDC due to the smaller number of sub-conductors required to fulfill the corona noise limits. Corona rings will always be significantly smaller for DC than for AC due to the lack of capacitive voltage grading of DC insulators.

With EHV AC, the switching overvoltage level is the decisive parameter; typical required air clearances at different system voltages are set by switching overvoltage levels in the range of 1.8 to 2.6 p.u. of the phase-to-ground peak voltage. With HVDC, the switching overvoltages are lower, in the range of 1.6 to 1.8 p.u., and the air clearance is often determined by the required lightning performance of the line.