Category: Blog

AQ: Excitation system in generator

The excitation system requires a very small fraction of the total power being generated. If we could simply increase the excitation (a very small amount of power) and increase the generator’s real power output, the world’s energy problems would be solved, because we would have a perpetual motion machine.

In the case of a generator connected to a large grid, the generator will inject any desired amount of power into the grid if its prime-mover is fed the desired power (plus a small additional amount of power to take care of losses). This is true, regardless of the total load on the grid, because the generator’s output is an extremely small fraction of the total grid power, and it alone cannot make drastic changes to the grid’s frequency.

Normally, the load varies by a very small fraction of the total grid power. If the load increases, the frequency of the entire grid (including the generator in question) drops by a very small amount, generally less than one-hundredth of a hertz. The frequency slew (that is, the rate of change of frequency) is very low, because a massive amount of energy is stored as kinetic energy in the rotors of all of the generators. At this point, nothing needs to be done; the system simply runs a little faster or slower.
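
To put a rough number on that stored-energy argument, the sketch below evaluates the aggregate swing equation df/dt = ΔP·f0 / (2·H·S). The system size, inertia constant and load step are assumed, illustrative values, not data for any particular interconnection.

```python
# Rough illustration of why the frequency slew is so low on a large grid.
# All figures below (system size, inertia constant, load step) are assumed
# for illustration only, not taken from any particular interconnection.

f0 = 50.0          # nominal frequency, Hz
S_system = 500e9   # total rating of all online generation, VA (assumed)
H = 5.0            # aggregate inertia constant, s (typical 3-6 s, assumed)
dP = 1e9           # sudden load increase, W (assumed)

# Kinetic energy stored in all the spinning rotors
E_kinetic = H * S_system                 # joules
# Initial rate of change of frequency from the aggregate swing equation
rocof = dP * f0 / (2 * H * S_system)     # Hz per second

print(f"Stored kinetic energy: {E_kinetic/1e9:.0f} GJ")
print(f"Initial frequency slew: {rocof*1000:.2f} mHz/s")
```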

Over time, as the load changes a greater amount, the frequency moves further from the nominal frequency (50 Hz or 60 Hz). When the difference between the actual frequency and the nominal frequency becomes greater than about 0.01 Hz, action is taken to make changes to the output of the grid’s generators.

The specific action may be determined by the regulating authority (for instance, a power pool in the US), and it is usually based on economics, subject to other constraints. If the load has increased (and the frequency is below the nominal frequency), the generators with the lowest incremental cost of power will be asked to increase their output, or, if all generators are near their limits, new generators (with the lowest incremental cost) are asked to come on line. It’s important to note that a generator’s limit is usually 80% or 90% of its rating. The 10% or 20% of unused capacity is the system’s “spinning reserve”, which is used to maintain grid stability during sudden, large power variations.
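
As a loose illustration of the merit-order idea described above (cheapest incremental cost first, each unit held below its spinning-reserve cap), here is a minimal dispatch sketch. The unit names, ratings, costs and the 90% cap are invented for the example; real power-pool dispatch also handles ramp rates, network constraints and reserve obligations, which this sketch ignores.

```python
# Minimal merit-order dispatch sketch.  Units, costs and the 90 % loading
# cap (spinning reserve) are illustrative assumptions, not real data.

units = [
    # (name, rating_MW, incremental_cost_$per_MWh, current_output_MW)
    ("A", 600, 22.0, 480),
    ("B", 400, 30.0, 300),
    ("C", 300, 45.0, 0),   # unloaded; started only if cheaper units run out of headroom
]
RESERVE_FACTOR = 0.9       # dispatch each unit only up to 90 % of its rating

def dispatch_increase(units, extra_mw):
    """Assign extra load to the cheapest units first, respecting the cap."""
    plan = {}
    remaining = extra_mw
    for name, rating, cost, output in sorted(units, key=lambda u: u[2]):
        headroom = rating * RESERVE_FACTOR - output
        take = min(max(headroom, 0.0), remaining)
        if take > 0:
            plan[name] = take
            remaining -= take
        if remaining <= 0:
            break
    return plan, remaining   # remaining > 0 would mean more units must start

plan, shortfall = dispatch_increase(units, 150)
print(plan, "unserved:", shortfall)
```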

The same thing happens with a generator connected only to its own load, or to a weak grid with just a few other generators. However, because relatively little kinetic energy is stored in the rotors of the one or few generators, the frequency change associated with a load change is much greater, and corrective action may not be taken before the frequency has moved by more than a few hertz.

AQ: High starting torque, synchronous motor, induction motor or DC motor?

It depends on much more than the two requirements listed, high starting torque and variable speed. What kind of application is it for? Is it on an automobile (where you already have DC) or in a factory, and do you have the budget and/or space for a variable frequency drive? A synchronous servo motor gives great dynamic control and great starting torque per unit volume, but its speed range is limited by the back EMF unless you use field weakening. Servo motors are also the most expensive, due to their position sensors and more intelligent drives.

With a proper soft-start drive you can go with an induction motor, but it depends on the application; if the power is small, a stepper motor is also an option. As others have said, though, the DC series motor’s starting torque is high.
DC series motors have high starting torque, but induction motors offer a wide range of speed control. So if a DC motor is used you can use a DC drive, although it will be expensive, and DC motors are harder to maintain than AC motors because of the commutator.

A DC series motor would provide both the high starting torque and the adjustable speed, but beware that DC motors have high maintenance costs and also require AC-to-DC conversion. You could use other available options, e.g. double-wound induction motors, depending on your requirements.

But today, there is no application where you cannot apply AC motors, asynchronous or synchronous. If the motor and the associated power electronics are correctly rated, you can have any starting torque you want.

The typical application of DC series motors was in locomotives. That technology has been replaced by AC motors over the last twenty years. The latest generation of high-speed trains uses synchronous, permanent magnet motors.

AQ: EMI & EMC

EMI/EMC is more a subjective topic than a theoretical one; let us look at it starting with noise prevention and then moving on to noise suppression.

Prevention, or designing the solution in, means concentrating on the noise-generating parts and components and the role they play in the circuit. These are the parts and circuits directly involved in switching: the PFC MOSFET and its driver, the PFC diode, the DC/DC switching MOSFET and its driver, and its output diode. Do not leave out the magnetics and the layout; a bad design will cause ugly switching waveforms and give you headaches with EMC later.

Part/component and topology selection is also important, since it predetermines to some degree how much EMI/EMS you will need to take care of; as Stephen explained, a phase-shifted topology is better than a non-phase-shifted one.

The MOSFET will produce more noise at high frequency, but this can be compensated for or tolerated to some extent through the gate-drive speed, by using a snubber, and possibly by shielding. The output diode should be carefully selected so that its high-frequency noise stays within your output noise specification; otherwise it becomes a problem. Make sure this noise cannot be transmitted as radiated emissions or coupled into your primary circuit, where it would travel all the way out to the AC input and be radiated from there. Reverse recovery time (Trr) is the parameter to look at, and of course the lower the better. In any case, some snubber (RC, ferrite bead, etc.) will likely need to be determined and added.
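
As one way such an RC snubber might be sized, the sketch below follows a common bench rule of thumb: add capacitance at the ringing node until the ring frequency halves, back out the parasitic L and C, and pick R near the resulting characteristic impedance. The measured frequency, test capacitance, node voltage and switching frequency are placeholder assumptions, and the result is only a starting point for tuning on the bench.

```python
import math

# Rough RC snubber sizing sketch for a ringing diode/MOSFET node, using the
# common "add capacitance until the ring frequency halves" bench method.
# All numbers below are placeholders for illustration.

f_ring = 40e6        # measured ring frequency, Hz (assumed)
C_added = 300e-12    # test capacitance that halved the ring frequency, F (assumed)
V_node = 100.0       # voltage swing at the node, V (assumed)
f_sw = 100e3         # switching frequency, Hz (assumed)

# Halving f means the total C quadrupled, so the parasitic C is C_added / 3
C_par = C_added / 3
L_par = 1 / ((2 * math.pi * f_ring) ** 2 * C_par)

R_snub = math.sqrt(L_par / C_par)     # roughly the characteristic impedance
C_snub = 3 * C_par                    # a typical starting point, then tune
P_snub = C_snub * V_node ** 2 * f_sw  # dissipation added by the snubber cap

print(f"C_par ≈ {C_par*1e12:.0f} pF, L_par ≈ {L_par*1e9:.1f} nH")
print(f"Try R ≈ {R_snub:.1f} ohm, C ≈ {C_snub*1e12:.0f} pF, P ≈ {P_snub:.2f} W")
```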

Noise suppression refers to filters, energy-dumping circuits and so on, but some of it is a basic need. One example is the input filter, which provides noise isolation in both directions: it keeps noise generated inside the PSU from passing into the input supply system, where it could interfere with other systems and supplies in the environment (EMI), and it keeps environmental noise from entering your power supply and interfering with it (EMS). An input filter is definitely a must for your switching frequency and its harmonics, which fall into the range covered by the EMI standards.
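
To make the input-filter requirement concrete, here is a rough estimate of the corner frequency a single differential-mode LC stage would need, given an assumed excess over the limit at the lowest offending frequency. The frequency, excess and margin figures are assumptions, and a real design must also handle the common-mode path, damping and component parasitics.

```python
import math

# Rough sizing of the differential-mode input LC filter corner frequency.
# Assumed inputs: the offending frequency and how far it exceeds the limit.

f_noise = 150e3        # lowest offending frequency in the EMI band, Hz (assumed)
excess_dB = 40.0       # measured emission minus limit, dB (assumed)
margin_dB = 6.0        # design margin, dB (assumed)

att_needed = excess_dB + margin_dB
# A single LC stage rolls off at roughly 40 dB/decade above its corner
f_corner = f_noise / 10 ** (att_needed / 40)

# Example component pair giving that corner (many L/C combinations work)
L = 100e-6             # chosen inductance, H (assumed)
C = 1 / ((2 * math.pi * f_corner) ** 2 * L)

print(f"Need ≈ {att_needed:.0f} dB at {f_noise/1e3:.0f} kHz -> corner ≈ {f_corner/1e3:.1f} kHz")
print(f"With L = {L*1e6:.0f} µH, C ≈ {C*1e6:.2f} µF")
```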

There are many techniques for suppressing the noise, and the choice depends on the location and the nature of the circuit, the switch and the diode. The RCD you mention, if I am not mistaken, refers to the RCD clamp added across the main transformer, the DC/DC switch and, typically, the output diode; you are on the right track in using this kind of snubber around these components.

Shielding may be needed for your main transformer if it has a gap (it may not be needed if your controller and switcher are genuinely good parts), but decide only after you have studied the samples.

Good layout always gives peace of mind. Noisy parts have to be kept some distance away from the noise-sensitive controller and decision-making circuitry, and decision-making signals have to be terminated at points chosen so that they do not pick up high-noise-content signals. If there is no other choice, an RC filter is unavoidable; in practice an RC is commonly placed in front of the decision-making circuits anyway, even when the signal is known to be clean.

AQ: Phase rotation errors

Phase rotation errors are not as rare as they ought to be. I’ve seen more than one building with a systematic phase rotation error. This can be prevented by carefully following the color coding system (Yellow Orange Brown and Red Blue Black for 480 volt and 208 volt systems in the US for example) and tagging feeders at both ends to assure proper connections.

To check for proper phase rotation sequencing (ABC and not ACB) you can use a phase rotation meter. Without one, you can bump a three-phase motor that should be correctly connected to see if it turns in the right direction. If it’s wrong, reverse any two phase wires between the source and the distribution equipment. However, if you have a tie breaker and intend to operate the secondaries of two transformers in parallel by closing it, that is not good enough. Both transformer distribution networks have to be connected correctly on all three phases. You have to check the voltage across each corresponding pair of terminals on the tie breaker and be certain they are all about zero volts. If you don’t and there is an error, closing the tie breaker, if that is possible at all (some electronic breakers may lock you out), will result in a bolted phase-to-phase fault that can severely damage your distribution equipment. Phase rotation errors are invariably the result of incompetent installation, inadequate specifications for feeder identification, and inadequate inspection.
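
The phasor arithmetic below shows why that tie-breaker check matters, assuming a 480 V (277 V line-to-neutral) system purely for illustration: with both sides correctly phased, every pole sees roughly zero volts, while a swapped pair of phases puts full line-to-line voltage across two of the poles.

```python
import cmath, math

# Phasor sketch: voltage across each tie-breaker pole when the two sources
# are correctly phased versus when two phases are swapped on one side.
# A 480 V (277 V line-to-neutral) system is assumed for illustration.

V_LN = 480 / math.sqrt(3)

def phases(order):
    """Return line-to-neutral phasors for the given phase sequence string."""
    angles = {"A": 0.0, "B": -120.0, "C": +120.0}
    return [cmath.rect(V_LN, math.radians(angles[p])) for p in order]

side1 = phases("ABC")            # correctly connected side of the tie breaker
good  = phases("ABC")            # other side, correct
bad   = phases("ACB")            # other side with B and C swapped

for label, side2 in (("correct", good), ("B/C swapped", bad)):
    drops = [abs(v1 - v2) for v1, v2 in zip(side1, side2)]
    print(label, [f"{d:.0f} V" for d in drops])

# correct      -> about 0 V on every pole: safe to close
# B/C swapped  -> about 480 V across two poles: closing means a bolted fault
```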

There are times when the phase rotation error is made on the primary side of the transformer. If this happens, it can be compensated for by making a corresponding reversal on the secondary side. This is less desirable, but it will work. If you have multiple phase rotation errors in the same distribution network, you have quite a mess to clean up; it will be time consuming and expensive to track them all down and be certain you have eliminated them. False economies from cutting corners on the initial installation of substations and distribution equipment will lead to very expensive and inconvenient repairs. If the problem is not corrected, you risk severe damage to three-phase load equipment.

AQ: High AC current inductors

There are several issues at work here. For high AC current inductors, you want to have low core losses, low proximity loss in the windings, and low fringing effects.

At typical operating frequencies, ferrite has by far the lowest core loss, much lower than MPP and other so-called “low-loss” materials. So from this standpoint you would like to use it.

A toroid gives the greatest winding surface for the magnetic material, letting you use the least number of layers and hence minimizing proximity loss. The toroid also has the advantage of putting all the windings on the outside of the structure, facilitating cooling. This is very important.

However, you can’t easily gap a ferrite toroid; it’s very expensive to do.

Some aerospace applications actually cut the ferrite toroid into segments and reassemble them with several gaps to solve the problem. The multiple gaps keep fringing effects low. It might be nice if you could buy a set of toroidal segments so you don’t have to do the cutting because that is a big part of the cost. I don’t know if that is a reasonable thing to do, maybe someone can comment.
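
One rough way to see why several small gaps fringe less than one large gap is McLyman's approximate fringing-flux factor, evaluated below for the same total gap split into 1, 2, 4 and 8 pieces. The core area, winding dimension and gap length are invented for the illustration, and the formula is only an approximation for a conventional gapped leg, not a rigorous model of a segmented toroid.

```python
import math

# Illustrative comparison of fringing with one large gap versus several
# smaller gaps of the same total length, using McLyman's approximate
# fringing-flux factor F = 1 + (lg / sqrt(Ac)) * ln(2 * G / lg).
# Core cross-section, winding dimension and gap length are assumed values.

Ac = 1.0e-4       # core cross-sectional area, m^2 (assumed)
G  = 20e-3        # winding dimension used in the approximation, m (assumed)
total_gap = 2e-3  # total gap length, m (assumed)

def fringing_factor(lg):
    return 1 + (lg / math.sqrt(Ac)) * math.log(2 * G / lg)

for n_gaps in (1, 2, 4, 8):
    lg = total_gap / n_gaps
    print(f"{n_gaps} gap(s) of {lg*1e3:.2f} mm: F per gap ≈ {fringing_factor(lg):.2f}")
```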

Once you go to MPP, the core loss goes up, but the distributed gap minimizes the fringing losses.

The MPP lets you run somewhat higher on current before saturation, but if you have high ac you can’t take full advantage of that due to the core losses.

All these tradeoffs (and quite a few more not mentioned for brevity) are the reason that so many different solutions exist.

AQ: Moving data around within memory of an individual PLC

The first question would have to be: why do you want to do it? If the data already exists in one location that is accessible to all parts of the program, why use up more PLC memory with exactly the same data?

Well, there are a couple of candidate reasons. One might be recipe data. You have an area of memory with a set of stored recipes for different products, and at an appropriate moment you want to copy a specific recipe from the storage area to the working area. The first thing to be said about that is that if your recipes are at all complex and you need a significant number of different recipes, then PLC memory is probably not the right place to be storing them. The ultimate, these days, of course, is that recipes are created by techies on PCs away from the production area, in nice quiet, comfortable labs or whatever, and are stored on a SQL server. Only the recipe for today’s actual production run gets transferred to the PLC. But there are some applications where only a limited number of different recipes is required and the recipes themselves are quite simple, in which case it can be reasonable to store the recipes in PLC memory.

A second reason for copying memory areas within the same PLC is for procedures, sub-routines or the like. But again, these days all PLC languages have some sort of built-in facility for procedures – what Rockwell uniquely calls Add-On Instructions and everyone else calls UDFBs, user-defined function blocks. In any case, the point is that these facilities usually make all that memory management transparent to the programmer. You just configure the UDFB and call it as required; the compiler takes care of all the memory data moves for you.

Another reason for copying memory, actually related to the previous one, is a technique much used by PLC programmers where they use an area of memory as a ‘scratchpad’. They copy some unprocessed data to the scratchpad area, all of the operations on the data are performed in the scratchpad, and at the end they copy the processed data back again. Again, it is questionable how much this technique is actually required these days; I would suggest that in most cases there is probably a better way using a UDFB. But I have seen some programmers who routinely include a scratchpad area within any UDFBs they define.

AQ: Creepage in thermal substations

The term creepage distance is specifically associated with porcelain insulators used in air-insulated substations. The insulator surface attracts dust, pollution (in industrial areas) and salt (along the sea coast), and these form a conducting layer on the surface of the insulator body when the surface is wet. As long as the surface is dry there is not much of a problem, but when it becomes wet in the early morning or during the winter season, the outer surface forms a conducting path along the surface from the high-voltage terminal to the earthed metal fitting at the end of the insulator, which may lead to surface conduction and finally an external flashover. The insulators are provided with sheds to limit direct exposure to mist or dew. The sheltered area of the sheds does not allow a continuous conducting layer to form along the surface of the insulator, because the part of the surface under the sheds may not become wet from mist or dew; this part (the length along the bottom surface) of the insulator surface is called the protected creepage.

Measurement of corona inception and extinction voltages gives a fair idea of the possible flashover even with protected creepage, but these values change under different levels of pollution.
This problem does not arise with composite insulators, because the silicone rubber shed surface is hydrophobic and does not allow a continuous wet conducting layer to form. Hence a higher creepage is not specified for composite insulators.

Air density is also a limiting factor when deciding the creepage of insulators, so higher creepage is needed at higher altitudes.
You may have to assess the pollution level and the altitude of the substation and select the creepage accordingly.
Medium pollution levels may call for about 25 mm/kV.
Very high pollution areas, such as the sea coast and chemical or pharmaceutical industrial areas, may call for 31 mm/kV; at that level the insulators can become expensive, and periodic hot-line washing is an alternative way of cleaning the pollution off the insulators.
For very high pollution levels, GIS may be the safe solution (if cost is not an issue).
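
A rough sizing sketch along those lines, using the specific creepage figures quoted above (in mm per kV of highest phase-to-phase system voltage); the 36 kV highest system voltage for a 33 kV network and the altitude factor are assumptions for illustration.

```python
# Rough creepage-length sketch using the specific creepage figures quoted
# above (mm per kV of highest phase-to-phase system voltage).  Um and the
# altitude factor are assumed for illustration.

specific_creepage = {          # mm/kV, as quoted in the text above
    "medium pollution": 25,
    "very high pollution": 31, # sea coast, chemical/pharma industrial areas
}

Um = 36.0              # highest system voltage for a 33 kV network, kV (assumed)
altitude_factor = 1.0  # increase at high altitude; site-specific (assumed)

for level, mm_per_kV in specific_creepage.items():
    required = mm_per_kV * Um * altitude_factor
    print(f"{level}: ≈ {required:.0f} mm of creepage at Um = {Um} kV")
```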

Thermal substations where there are no electrostatic precipitators may also experience equipment failures due to pollution. Pressurized equipment such as SF6 gas circuit breakers experienced external flashovers during the winter months in northern India. The utility initially would not accept the theory of insulation failure due to pollution, but they had to accept pollution as the cause when a similar failure occurred during the same winter months the following year; they have resorted to hot-line washing since then and there have been no more such failures. Sometimes these deposits are not glaringly visible, yet failure can still happen.

AQ: Sub-transmission network

Q:
What factors determine the current distribution between two 33 kV feeders feeding a 33/11 kV substation within a sub-transmission network?

A:
Try the voltage divider rule. Take the R and X of each feeder (resistance and reactance) and find its Z; remember that Z = √(R² + X²). Now that you have the Z of each feeder, find the Z of the two in parallel: Z = 1/(1/Z1 + 1/Z2). So, if Z1 = 2.16 ohms and Z2 = 1.67 ohms, then the Z of the two in parallel is 0.94 ohms. Now pass the current of the entire substation through these two feeders in parallel. Let’s say that the current is 240 amps. Then 240 A × 0.94 ohms = 226 volts (V = I × Z). And since voltage divided by impedance gives current (I = V/Z), take the voltage drop across the two feeders in parallel and divide it by each feeder’s impedance to get that feeder’s current: 226 V / 2.16 ohms = 105 amps (feeder 1) and 226 V / 1.67 ohms = 135 amps (feeder 2). I have not tried this with your exact situation; having different voltages from the two different substations will change things, but at least this way you have a good start on the problem.
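
The same magnitude-only arithmetic, written as a short script so the numbers can be swapped for the actual feeder constants; the R and X values below are invented simply to reproduce the 2.16-ohm and 1.67-ohm impedances used in the example, and the calculation ignores any difference in the feeders' R/X angles, as the worked example above does.

```python
import math

# The feeder-split arithmetic from the answer above, written out step by step.
# Replace the R and X values with the actual feeder constants.

feeders = {
    "feeder 1": {"R": 1.6, "X": 1.45},   # example values giving Z ≈ 2.16 ohm
    "feeder 2": {"R": 1.2, "X": 1.16},   # example values giving Z ≈ 1.67 ohm
}
I_total = 240.0                          # total substation current, A

Z = {name: math.hypot(f["R"], f["X"]) for name, f in feeders.items()}
Z_parallel = 1 / sum(1 / z for z in Z.values())
V_drop = I_total * Z_parallel            # common drop across the paralleled feeders

for name, z in Z.items():
    print(f"{name}: Z = {z:.2f} ohm, I = {V_drop / z:.0f} A")
print(f"Parallel Z = {Z_parallel:.2f} ohm, common drop ≈ {V_drop:.0f} V")
```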

Since one end is tied together and the two other ends are fed from different substations, you have the classic sending- and receiving-end voltage relationship. Since the load is the one substation, there will be only one power factor for the one load, so I would think this formula applies: Es = √((Er·cosθ + I·R)² + (Er·sinθ + I·X)²), which is the square root of ((receiving-end voltage times the cosine of the load phase angle, plus current times line resistance)² + (receiving-end voltage times the sine of the load phase angle, plus current times line reactance)²). The voltage drop along each line would be VD = I(R·cosθ + X·sinθ), where R is the line resistance, X is the line reactance and θ is the phase angle of the load.
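
A sketch of the sending-end voltage and approximate voltage-drop formulas above, with assumed numbers for one feeder; the receiving-end voltage, current, power factor and line constants are placeholders.

```python
import math

# The sending-end voltage and voltage-drop formulas from the paragraph
# above, with assumed numbers for one 33 kV feeder and its load.

Er = 19052.0        # receiving-end voltage, V line-to-neutral (33 kV / sqrt(3), assumed)
I = 135.0           # feeder current, A (assumed)
pf = 0.9            # load power factor, lagging (assumed)
R, X = 1.2, 1.16    # feeder resistance and reactance, ohms (assumed)

theta = math.acos(pf)
Es = math.sqrt((Er * math.cos(theta) + I * R) ** 2 +
               (Er * math.sin(theta) + I * X) ** 2)
VD = I * (R * math.cos(theta) + X * math.sin(theta))

print(f"Sending-end voltage ≈ {Es:.0f} V, approximate drop ≈ {VD:.0f} V per phase")
```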

AQ: Power converter trend

The trend toward lower losses in power converters is not apparent in all applications of power converters, and it is not apparent that the solutions serving a given market will all end up with the same losses. In terms of the market shift that you mention, Prof., the answer is probably that each market is splitting into a lower-efficiency and a higher-efficiency solution.

From my limited view, the reason for this is the effort and time required to do the low-loss development. The early developers of low-loss converters are now ahead, and those that were slower may never catch them. This gap is widening in a number of converter markets, with both higher-loss and lower-loss offerings continuing to be used and sold. The split does not appear to follow levels of development or geography.

Some markets already have very efficient solutions, other markets have less efficient ones, and others have had high-loss solutions that customers accepted. For some markets the path to lower-loss converters is not yet clear, and in some markets the requirement may never actually become real.

It does seem that there is a real case to make for any power converter market splitting in two as the opportunities presented by lowering the power loss are taken.

All low loss converters present significant challenges and are all somewhat esoteric.

For me, power supply EMI control consists of designing filtering for differential-mode and common-mode conducted emissions. The differential-mode filtering attenuates the primary-side differential lower-frequency switching current at its fundamental and harmonic frequencies. The common-mode filtering provides a low-impedance return path for the high-frequency noise currents that result from the high dV/dt transitions on the power semiconductors (switching MOSFET drain, rectifier cathodes) during switching. These noise currents ring at high frequencies as they oscillate in the uncontrolled parasitic inductance and capacitance of their return-to-source path. Shortening and damping this return path lets the high-frequency noise currents return locally instead of via the copper measurement bench and the conducted-EMI current or voltage probe (LISN), as well as giving a more damped ringing. Shortening this return path has the added benefit of decreasing radiated emissions. In addition, laying out the power train so as to minimize the loop area of both the primary- and secondary-side switching currents minimizes the associated radiated emissions.

When I mentioned the criticism of resonant-mode converters related to the challenges of EMI filtering, I was referring to the additional differential-mode filtering required. For example, if you take a square-wave primary-side current waveform and analyze its differential frequency content, the fundamental magnitude will be lower and there will be higher-frequency components, compared with a purely resonant approach at the same power level. It is normally the lower-frequency content that has to be filtered differentially.
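
A rough way to see this is to compare the harmonic content of a square primary current with that of a sinusoidal (resonant) one of equal RMS, as in the sketch below. Equal RMS is only one possible reading of "the same power level", so treat the numbers as illustrative.

```python
import math

# Rough comparison of the differential-mode spectrum of a square primary
# current versus a sinusoidal (resonant) one, assuming equal RMS current.
# The switching frequency and current level are assumed for illustration.

I_rms = 1.0
f_sw = 100e3                       # switching frequency, Hz (assumed)

# A square wave of amplitude A has RMS = A and odd-harmonic peaks 4A/(n*pi)
A = I_rms
square = {n: (4 * A / (n * math.pi)) / math.sqrt(2) for n in (1, 3, 5, 7)}

# A pure sinusoid with the same RMS has only the fundamental
sine = {1: I_rms}

for n in (1, 3, 5, 7):
    print(f"harmonic {n} ({n*f_sw/1e3:.0f} kHz): "
          f"square ≈ {square[n]:.2f} A rms, resonant ≈ {sine.get(n, 0):.2f} A rms")
```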

Given these differences, the additional EMI-filtering volume and cost of the resonant approach may be a disadvantage.

AQ: Flyback & boost applications

For flyback and boost applications, powder cores such as Kool-mu, Xmu, etc. are usually the best-performing and lowest-cost choice. Even these may need to be gapped, and if CCM operation is required, a “stepped gap” is preferred to allow a large load compliance. Center stepped gaps greatly reduce the fringing flux, because there is never a complete gap, only localized saturation. This permits the inductor’s value to “swing” more and accommodate the required operation.
With the gap confined to the center leg, the outer copper band can be applied without significant loss.

To explore further, dissimilar core materials can be used in parallel, ferrite and powdered types, so that different materials do the work at different operating points within the same construction. Some decades ago, we had some high-power projects that used permanent magnets within a ferrite’s gap to provide a flux-bias offset for a forward topology.

Abe Pressman wasn’t big on exploring magnetic losses; however, he worked at lower frequencies than are typical today. MPP cores are great with a large DC bias, but they suffer high losses if the AC swing is large and fast. Toroids also have the least efficient winding window; however, they are the best at mitigating EMI.