Category: Blog

AQ: Home automation concept

The concept of home automation on a global scale is a good one. How to implement such a technology worldwide is an interesting problem, or rather a set of issues to be resolved. Before global acceptance can be achieved, home automation products may need a strategy that starts with a look at companies that have succeeded in getting global acceptance of their products.

If we look at the companies with the most products distributed around the world, Intel is one of them. What's interesting is that this company has used automation in its fabs for decades. That automation has allowed it to produce products faster and cheaper than the rest of the industry, and the company continues to invest in automation and in the ability to evolve with technology and management. Many companies compete on the world stage, but I don't think many of them distribute as much product. So to make home automation accepted and achieve global acceptance, the industry and its factories have to evolve to compete. That can be accomplished by adopting a strategy that updates factory automation: stop using products that were developed in the 1970s (another way of saying COTS) and progress to current and new systems. A ten-year-old factory may be considered obsolete if the equipment inside is as old as the factory.

Now for cost. When I think of a PLC or commercial controller, I see a COTS product that may be using obsolete parts that are no longer in production, or old boards. So I see higher manufacturing cost and reduced reliability. Many procurement people evaluate risk in a way that rates older boards as lower risk in the short term, which is not a good evaluation for the long term. Cost is a function of how much product can be produced at the lowest cost and how efficient and competitive the producing company is. So time is money. Responsibility for cost lies with the company and its ability to produce a competitive product, not with the government.

Now on to control systems and safety. If an automation system is used in the home, safety has to be a major consideration. I know that in Intel fabs, if you violate any safety rule you won't be working at that company long. To address safety, the product must conform to the appropriate standards. Safety should be a selling point for home automation. Automation engineers should remember that safety is one of the main considerations for an engineer: if someone gets hurt or killed because of a safety issue, the first person looked at is the engineer.

Now, 30% energy savings in my book is not enough; 35 to 40 percent should be the goal. Solar cells have improved, but they are most efficient in the southwest US. Stirling engines are 1960s designs and use rare gases such as helium, which may not be a renewable resource. Wind generators need space and are electromechanical, so their reliability and maintenance need improving.

Now on to interface standards. Most modern factories that produce processors use the Generic Equipment Model (GEM) standard, and it works well. As for what standard interface to use and when: one box produced by one company may use RS-422 where another company's uses RS-485, so the system engineer should resolve these issues before detailed design starts. Check with the IEEE, or you may be able to find the spec at EverySpec.com, which is a good place to look for some of the specs needed.

So I conclude: many issues exist, but when broken down, home automation is viable. It needs a concerted effort and commitment, at least from the companies and management that produce automation products, plus a different model for manufacturing and growing home systems.
Home automation with a focus on energy savings as a goal is a good thing. We have a lot of work ahead of us.

AQ: Signal processing and communications theory

Coming from a signal processing and communications theory background, but with some experience in power design, I can’t resist the urge to chime in with a few remarks.

There are many engineering methods to deal with sources of interference, including noise from switching converters, and spread spectrum techniques are simply one more tool that may be applied to achieve a desired level of performance.

Spread spectrum techniques will indeed allow a quasi-peak EMC test to be passed when it might otherwise be failed. Is this an appropriate application for this technique?

The quasi-peak detector was developed to provide a benchmark for determining the psycho-acoustic "annoyance" of interference to analog communications systems (more specifically, predominantly narrowband AM-type communication systems). Spread spectrum techniques that reduce the QP detector reading will almost undoubtedly reduce the annoyance the interference would otherwise have presented to the listener. The intent was to reduce the degree of objectionable interference, and the application of spread spectrum meets that goal. This doesn't seem at all like "cheating" to me; the proper intent of the regulatory limit is still being met.

On the other hand, as earlier posters have pointed out, spectrum spreading does nothing to reduce the total power of the interference; it simply spreads it over a wider bandwidth. Spreading the noise over a wider bandwidth provides two potential benefits. The most obvious occurs when the victim of the interference is inherently narrowband: spreading the interference beyond the victim's bandwidth provides an inherent improvement in signal-to-noise ratio. A second, perhaps less obvious, benefit is that the interference becomes more noise-like in its statistics. Noise-like interference is less objectionable to the human ear than impulsive noise, and it should be recognized that it is less objectionable to many digital transmission systems too.
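To make the trade-off concrete, here is a minimal numerical sketch (all parameter values are my own illustrative assumptions, not from any post): a fixed-frequency switching waveform versus a frequency-dithered version of the same waveform. The total power is essentially unchanged, but the tallest spectral line drops because the energy is smeared over many FFT bins.

import numpy as np

fs = 10e6                      # sample rate: 10 MHz (assumption)
t = np.arange(0, 0.01, 1 / fs)
f0 = 100e3                     # nominal switching frequency: 100 kHz (assumption)

# Fixed-frequency square wave: interference power concentrated in narrow harmonics.
fixed = np.sign(np.sin(2 * np.pi * f0 * t + 0.1))

# Same waveform with a 1 kHz triangular frequency dither of +/-5 % (spread spectrum).
frac = (1e3 * t) % 1
dither = 0.05 * f0 * (2 * np.abs(2 * frac - 1) - 1)
phase = 2 * np.pi * np.cumsum(f0 + dither) / fs
spread = np.sign(np.sin(phase + 0.1))

for name, x in (("fixed ", fixed), ("spread", spread)):
    X = np.abs(np.fft.rfft(x)) / len(x)
    print(f"{name}: total power = {np.mean(x**2):.3f}, "
          f"peak spectral line = {X.max():.4f}")
# Total power is essentially identical; the peak line drops sharply for "spread".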

However, from an information-theoretic perspective the nature of the interference doesn't matter; only the signal-to-noise ratio does. Many modern communication systems employ wide bandwidths, and they employ powerful adaptive modulation and coding schemes that effectively de-correlate interference sources (making their effect noise-like). These receivers don't care whether the interference is narrowband or wideband in terms of bit error rate (BER); they will be affected largely the same by a given amount of interference power (in theory identically, though implementation limitations still leave some gap to the theoretical limits).

It is worth noting, however, that while spectrum spreading techniques do not reduce the interference power, they don't make it any worse either. Thus these techniques may (legitimately, I would argue, as per the above) help with passing a test that specifies the CISPR quasi-peak detector, and should not make performance on a test specifying the newer CISPR RMS+Average detector any worse.

It should always be an engineering goal to keep interference to a reasonable minimum, and I agree it is aesthetically most satisfying (and often cheapest and simplest) to achieve this by reducing the interference at the source (a wide definition covering aspects of SMPS design from topology selection to PCB layout and beyond). However, the objective of controlling noise at the source shouldn't eliminate alternative methods from consideration in any given application.

There will always be the question of how good is good enough, and it is the job of the various regulatory bodies to define these requirements robustly enough that the compliance tests can't be "gamed".

AQ: Systems so friendly they don't need technicians to diagnose problems

How do we make our systems so friendly that they do not need technicians to help diagnose problems? Most of the more obvious answers have been well documented here, but to emphasize the point: diagnostics and alarms can dominate the amount of code constructed for an application. That is, the code required to fully diagnose an application may be as much as, if not more than, the code required to make the application work in the first place!

I have seen HMIs with diagnostic screens showing an animated version of the cause-and-effects code that lets users see where the trip condition is. I have also seen screens depicting prestart checks, operator guides, etc., all animated to help the user. Allen-Bradley even has a program viewer that can be embedded in an HMI screen, though you would probably need a technician to understand the code.
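As a sketch of what such a screen animates underneath (all tag names and cause descriptions here are hypothetical), a minimal "first-out" latch captures which condition tripped the system first, which is usually the question the user is really asking:

TRIP_CAUSES = {
    "PT-101_HIGH": "Discharge pressure high",
    "TT-205_HIGH": "Bearing temperature high",
    "FS-310_LOW":  "Cooling flow low",
}

class FirstOutLatch:
    def __init__(self):
        self.first_out = None          # latched first cause of the trip

    def scan(self, inputs: dict) -> bool:
        """Call once per scan with {tag: bool}; returns True while tripped."""
        active = [tag for tag, bad in inputs.items() if bad]
        if active and self.first_out is None:
            self.first_out = active[0]  # latch the first detected cause
        return self.first_out is not None

    def reset(self):
        self.first_out = None           # operator reset once the trip is cleared

# Dict order stands in for scan order here; a real system would timestamp each cause.
latch = FirstOutLatch()
if latch.scan({"PT-101_HIGH": False, "TT-205_HIGH": True, "FS-310_LOW": True}):
    print("TRIPPED, first out:", TRIP_CAUSES[latch.first_out])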

From a control system problem perspective, it is inevitable that you will need the vendor-based diagnostics to troubleshoot your system. Alarms may indicate that there is a system-related problem, but it is unlikely that you could build a diagnostics application in your own code that covers the full spectrum of problems you may encounter. For example, if the CPU dies or the memory fails, there is nothing left to tell you what the problem is 🙂

From an application/process problem perspective, if you have to resort to a technician going into the code to determine your problem, then your alarm and diagnostics code is not comprehensive enough! Remember your alarming guidelines: an operator must be able to deal with an abnormal situation within a given time frame. If the operator has to rely on a technician to wade through code, then you really do need to perform an alarm review and build a better diagnostics system.

AQ: OPC driver advantages

A few years back, I had a devil of a time getting some OPC Modbus TCP drivers to work with Modbus RTU-to-TCP converters. The OPC drivers could not handle the 5-digit RTU addressing. You need to make sure the OPC driver you try actually works with your equipment; try before you buy definitely applies here. Along with complications like dropped connections due to minor network glitches (a real headache and worth a topic all its own), there is the ability to use tag pickers and the like. The best thing to happen to I/O addressing is the use of data objects in the PLC and HMI/SCADA. The other advantage OPC can give you is the ability to get more quality information on your I/O. Again, check before you buy. In my experience, the only protocol worse than Modbus in the quality-information department is DDE, and that's pretty well gone. None of this helps when a Modbus slave reports stale data as if it were fresh; no I/O driver can sort that out, so you need a heartbeat.

A shout-out to all you equipment manufacturers putting Modbus RTU into equipment because it's easy: PLEASE BUILD IN A HEARTBEAT that us integrators can monitor, so we can be sure the data is alive and well.
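Here is a minimal, vendor-neutral sketch of the check being asked for (the register address, timeout, and helper names are assumptions): the slave increments or toggles a heartbeat register, and if the value stops changing, everything read from that slave gets treated as stale.

import time

class HeartbeatWatchdog:
    def __init__(self, timeout_s: float = 5.0):
        self.timeout_s = timeout_s
        self.last_value = None
        self.last_change = time.monotonic()

    def update(self, heartbeat_value: int) -> bool:
        """Feed the latest heartbeat register read; returns True while data is live."""
        if heartbeat_value != self.last_value:
            self.last_value = heartbeat_value
            self.last_change = time.monotonic()
        return (time.monotonic() - self.last_change) < self.timeout_s

# Usage: read_register() would come from your Modbus/OPC client of choice.
# wd = HeartbeatWatchdog(timeout_s=5.0)
# if not wd.update(read_register(4000)):   # 4000 is a hypothetical address
#     mark_all_tags_bad_quality()          # stale slave: don't trust "fresh" data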

Also, while you try before you buy, you want your HMI/SCADA to be able to tell the difference between Good Read, No Read, and Bad Read, particularly with an RTU network.

AQ: Self-Excited Induction Generator (SEIG)

The output voltage and frequency of a self-excited induction generator (SEIG) are totally dependent on the system to which it is attached.

The fact that it is self-excited means there is no field control and therefore no voltage control. Instead, the residual magnetism in the rotor is used in conjunction with carefully chosen capacitors at its terminals to form a resonant condition that mutually assists the build-up of voltage, limited by the saturation characteristics of the stator. Once this balance point is reached, any normal load will cause the terminal voltage to drop.

The frequency is totally reliant on the speed of the rotor, so unless there is a fixed-speed or governor-controlled prime mover, the load will see a frequency that changes with the prime mover and drops off as the load increases.
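As a rough back-of-the-envelope sketch (the numeric values are assumed for illustration, not from any post): the no-load voltage builds up when the excitation capacitors resonate with the machine's magnetizing reactance, and the output frequency simply tracks rotor speed:

X_C = X_m \quad\Rightarrow\quad C = \frac{1}{2\pi f X_m}

For example, an assumed X_m = 50 Ω at f = 60 Hz gives C = 1/(2π · 60 · 50) ≈ 53 µF per phase. The frequency follows f ≈ P·n_r/120 (P poles, rotor speed n_r in rpm), less a small slip-dependent correction.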

The above characteristics are what make SEIGs less than desirable for isolated/standalone operation IF steady, well-regulated AC power is required. On the other hand, if the output is going to be rectified to DC, then an SEIG can be used. Many of these undesirable "features" go away if the generator is attached to the grid, which supplies a steady voltage and frequency reference.

The way around all of these disadvantages is to use a doubly fed induction generator (DFIG). In addition to the stator connection to the load, the wound rotor is supplied with a varying AC field whose frequency is tightly controlled by smart electronics, so that a relatively fixed, controllable output voltage and frequency can be achieved despite the varying speed of the prime mover and the load. However, the cost of the wound-rotor induction machine plus the sophisticated control/power electronics is much higher than for other forms of variable speed/voltage generation.

AQ: How/where do we as engineers need to change?

System Design – A well-designed system should provide clear and concise system status indications. Back in the 70's (yes, I am that old), alarm and indicator panels provided this information in the control room, and device-level indicators further guided the technician to the problem. Today, these functions are implemented in control room and machine HMI interfaces. Through the use of input sensor and output actuator feedback, correct system operation can be verified on every scan.
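As a sketch of that scan-by-scan verification (tag names and travel time are assumed for illustration), a command/feedback discrepancy check only alarms after the actuator has had time to travel:

import time

class FeedbackMonitor:
    """Alarm when an actuator's feedback disagrees with its command for too long."""
    def __init__(self, travel_time_s: float = 3.0):
        self.travel_time_s = travel_time_s
        self.mismatch_since = None

    def scan(self, commanded: bool, feedback: bool) -> bool:
        """Call every scan; returns True when a discrepancy alarm is due."""
        if commanded == feedback:
            self.mismatch_since = None        # states agree: healthy
            return False
        if self.mismatch_since is None:
            self.mismatch_since = time.monotonic()
        # Only alarm after the actuator has had time to travel.
        return time.monotonic() - self.mismatch_since > self.travel_time_s

# Usage (hypothetical tags): XV-101 commanded open, limit switch still closed after 3 s
# valve_mon = FeedbackMonitor(travel_time_s=3.0)
# if valve_mon.scan(commanded=True, feedback=zso_101):
#     raise_alarm("XV-101 failed to open")   # hypothetical alarm helper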

Program (software) Design – It has been estimated that a well-written program is 40% algorithm and 60% error checking and parameter verification. "Ladder" is not an issue: process and machine control systems today are programmed in ladder, structured text, function block, etc. The control program is typically considered intellectual property (IP) and in many cases is "hidden" from view, which makes digging through the code impractical.

How/where do we as engineers need to change? – The industry as a whole needs to enforce better system design and performance. This initiative will come from the clients and be implemented by the developers. The cost/benefit trade-off will always be present: developers trying to improve their margins (reduce cost, raise price) and customers demanding more functionality while wanting to pay less. "We as engineers" are caught in the middle, trying to find better ways to achieve the seemingly impossible.

AQ: High voltage power delivery

You already know from your engineering that higher voltages result in lower operational losses for the same amount of power delivered. The bulk capacity of 3000 MW obviously has a great influence on the investment costs; it determines the voltage level and the required number of parallel circuits. Higher DC voltage levels have become more feasible for bulk power projects such as this one, especially when the transmission line is more than 1000 km long. On the economics, investment costs for 800 kV DC systems have been much lower since the 90's. Aside from reducing overall project costs, HVDC transmission lines at higher voltage levels require less right-of-way. Since you will also require fewer towers, as we will see below, you will also reduce the duration of the project (at least for the line).

Why DC and not AC? From a technical point of view, there are no special obstacles to higher DC voltages, while maintaining stable transmission can be difficult over long AC lines. Thermal loading capability is usually not decisive for long AC transmission lines because of limitations imposed by reactive power consumption, whereas the power transmission capacity of HVDC lines is limited mainly by the maximum allowable conductor temperature in normal operation. However, converter stations are expensive, which offsets the gain from the cheaper transmission line. Thus a short line is cheaper with AC transmission, while a longer line is cheaper with DC.
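The break-even argument can be sketched in a few lines; every cost figure below is an assumed placeholder for illustration, not data from a real project:

def breakeven_km(dc_terminal_cost, ac_terminal_cost, ac_cost_per_km, dc_cost_per_km):
    """Distance at which total DC cost (expensive converters, cheaper line)
    equals total AC cost (cheap terminals, costlier line)."""
    return (dc_terminal_cost - ac_terminal_cost) / (ac_cost_per_km - dc_cost_per_km)

# Illustrative numbers only (converter stations dominate the DC side):
d = breakeven_km(dc_terminal_cost=400, ac_terminal_cost=100,   # M$, assumed
                 ac_cost_per_km=1.0, dc_cost_per_km=0.6)       # M$/km, assumed
print(f"Break-even distance ~ {d:.0f} km")   # beyond this, DC is cheaper
# -> ~750 km with these assumed figures; beyond the break-even, DC wins.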
One criterion to be considered is insulation performance, which is determined by the overvoltage levels, the air clearances, the environmental conditions, and the selection of insulators. The requirements on insulation performance mainly affect the investment costs for the towers.

For the line insulation, air clearance requirements are more critical with EHVAC due to the nonlinear behavior of the switching overvoltage withstand. The air clearance requirement is a very important factor in the mechanical design of the tower. The mechanical load on the tower is considerably lower with HVDC because fewer sub-conductors are required to meet the corona noise limits, and corona rings will always be significantly smaller for DC than for AC because DC insulators need no capacitive voltage grading.

With EHVAC, the switching overvoltage level is the decisive parameter; typical required air clearances at different system voltages correspond to switching overvoltage levels between 1.8 and 2.6 p.u. of the phase-to-ground peak voltage. With HVDC, the switching overvoltages are lower, in the range of 1.6 to 1.8 p.u., and the air clearance is often determined by the required lightning performance of the line.

AQ: Hazardous area classification

Hazardous area classification has three basic components:
Class (I, II): type of combustible material (gas or dust)
Division (1, 2): probability of the combustible material being present
Gas Group (A, B, C, D): most easily ignited to least easily ignited (amount of energy required to ignite the gas)

Hazardous Area Protection Techniques: There are many, but the ones most commonly used for instrumentation are listed below:
1) Intrinsic safety: limits the amount of energy going to the field instrument (by use of an intrinsic safety barrier in the safe area). Live maintenance is possible; limited to low-energy instruments.
2) Explosion proof: a special enclosure for the field instrument that contains an explosion (if it occurs). Used for relatively high-energy instruments; the instrument should be powered off before the enclosure is opened.
3) Pressurized or purged: isolates the instrument from the combustible gas by pressurizing the enclosure with an inert gas.

Then there are encapsulation, increased safety, oil immersion, sand filling, etc.

Weather protection: Every field instrument needs protection from dust and water.
IP-xy as per IEC 60529, where
x- protection against solids
y- protection against liquids
Usually IP-65 protection is specified for field instruments in onshore applications (roughly the equivalent of NEMA 4X); IP-66 for offshore applications and IP-67 for submersible service.

AQ: What is true power and apparent power?

kW is true power and kVA is apparent power. In per-unit calculations, the base used most often, which I consider standard, is kVA, the apparent power, because the magnitude of the real power (kW) varies with a changing parameter: the cosine of the displacement angle (power factor) between the voltage and current. Also significant is that transformer ratings are given in kVA, short-circuit magnitudes are expressed in kVA or MVA, and the short-circuit duty of equipment is also expressed in MVA (and in thousands of amperes, kA).

In per-unit analysis, the base values are always a base voltage in kV and a base power in kVA or MVA. Base impedance is derived from the formula (base kV)^2 / (base MVA).

The base values for the per-unit system are inter-related. The major objective of the per-unit system is to create a one-line diagram of the system that has no transformers (transformer ratios) or at least minimizes their number. To achieve that objective, the base values are selected in a very specific way:
a) We pick a common base for power (I'll come back to whether it should be MVA or MW);
b) then we pick base values for the voltages following the transformer ratios. Say you have a generator with a nominal voltage of 13.8 kV and a step-up transformer rated 13.8/138 kV. The "easiest" choice is to pick 13.8 kV as the base voltage for the LV side of the transformer and 138 kV for the HV side;
c) once you have selected a base value for power and a base value for voltage, the base values for current and impedance are defined (calculated), as in the sketch below. You do not have a degree of freedom in picking base values for current and impedance.
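As a quick numeric sketch of steps a) through c), using the 13.8/138 kV example above and an assumed 100 MVA power base:

from math import sqrt

S_base_mva = 100.0     # common power base (assumed choice for illustration)
for kv in (13.8, 138.0):
    z_base = kv**2 / S_base_mva                       # (base kV)^2 / (base MVA), ohms
    i_base = S_base_mva * 1e6 / (sqrt(3) * kv * 1e3)  # A; assumes 3-phase MVA, L-L kV
    print(f"{kv:6.1f} kV side: Zbase = {z_base:8.2f} ohm, Ibase = {i_base:7.1f} A")
# 13.8 kV side: Zbase ~   1.90 ohm, Ibase ~ 4183.7 A
# 138.0 kV side: Zbase ~ 190.44 ohm, Ibase ~  418.4 A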

Typically, we calculate the base value for current as Sbase / (sqrt(3) · Vbase), right? If you are using that expression, you are implicitly saying that Sbase is a three-phase apparent power (MVA) and Vbase is a line-to-line voltage. The same goes for the expression for base impedance given above. So perhaps you could choose a kW or MW base value, but then you have a problem: how do you calculate base currents and base impedances? If you use the expressions above, you are implicitly saying that the number you picked for base power (even if you think of it as MW) is actually a base value for apparent power, i.e., kVA or MVA. If you insist on being different and really using kW or MW as the power base, you have to come up with new (adjusted) expressions for calculating base current and base impedance.

And, surprise! You will find that you need to define a "base power factor" to do so. In other words, you will be forced back into defining a base apparent power. So no, you cannot (easily) use a kW/MW base. For example, take a 100 MVA generator rated at 0.80 power factor (80 MW). You could pick 80 as the base power (instead of 100), but if you are using the expressions above for base current and base impedance, you are actually saying that the base apparent power is 80 MVA (not that the base active power is 80 MW).

AQ: How do generator designers determine the power factor?

The generator designers have to determine the winding cross-sectional area and the current density (A/mm²) needed to carry the required current, and the total flux and the flux variation per unit time per winding needed to produce the required voltage. Then they have to determine how the primary flux source will be generated (excitation), and how the required mechanical power can be transmitted into the electro-mechanical system at the appropriate speed for the required frequency.
In all of the above, we can have parallel paths of current, as well as of flux, in all sorts of combinations.
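For reference (standard machine theory, not specific to this post), the induction relationship behind point 1 below ties the designer's choices of turns, flux, and frequency together:

E_{\mathrm{rms}} = 4.44 \, f \, N \, \Phi_{\max}

where f is the frequency of the flux variation, N the number of series turns, and \Phi_{\max} the peak flux per pole (winding and distribution factors omitted). This is why the designer can trade turns, flux, and speed against one another to meet the voltage requirement.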

1) All ordinary AC power depends on electrical induction, which is basically flux variation through coils of wire (in the stator windings).
2) Generator rotor current (also called excitation) is not directly related to power factor, but to the no-load voltage generated.
3) The reason for operating near unity power factor is that it gives the most power per ton of material used in the generating system, and at the same time minimizes transmission losses.
4) Most generating companies do charge larger users for MVAr; for the private user, it is included in the tariff, based on some assumed average PF less than unity.
5) In some situations, synchronous generators have been used simply as VAr compensators, at zero power factor. They are much simpler to control than static VAr compensators, can be varied continuously, and do not generate harmonics. Unfortunately, they have higher maintenance costs.
6) When the torque from the prime mover exceeds a certain limit, it can cause pole slip (see the expression below). The limit at which this happens depends on the available flux (from the excitation current) and the stator current (from/to the connected load).
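The pole-slip limit in point 6 is the familiar power-angle relationship (standard round-rotor theory, quoted here for reference):

P = \frac{E V}{X_s} \sin\delta , \qquad P_{\max} = \frac{E V}{X_s} \ \text{at} \ \delta = 90^\circ

where E is the internal EMF set by the excitation, V the terminal voltage, and X_s the synchronous reactance. Prime-mover torque that pushes P beyond P_max drives δ past 90° and the machine slips a pole, which is exactly why the limit depends on both the excitation (through E) and the connected load.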