Feedback Control of Mechanical Systems
Lead Author(s): Nolan Tsuchiya
This book covers the fundamentals of feedback control of dynamic systems. Appropriate for upper-division engineering undergraduate students with some knowledge of dynamic systems.
The basic concept of feedback control:
As a very general description, control engineers use control theory to design systems with a desired behavior. Control theory can be thought of as a set of analytical tools (some of which we will study in this course) used to analyze and design control systems. A quintessential example of an everyday control system is the cruise control feature in your automobile. That is, there is a control module that compares your vehicle's current speed (the measurement) to your desired speed (the reference speed), then intelligently decides how much throttle to apply based on the difference (the error) between the two.
In this course, we will study feedback control systems as well as develop analytical tools that will enable us to do meaningful analysis and control design. The following figure is the fundamental unity feedback block diagram and will serve as the foundation for every analysis and design tool we will study throughout the course.
Let's define some of the basic elements of this block diagram by going around the feedback loop beginning with the plant P(s):
- P(s) is the plant. This box represents the dynamics of the real-life system you are trying to control. You studied this extensively in ME340, in which you learned how to generate a transfer function for a given system by using specific modeling techniques. Often, P(s) is referred to as the open-loop system. For the cruise control example, the plant is the automobile.
- y is the measurement. This is the measured output. For the cruise control example, y is the actual speed, measured by a sensor. In real systems, the measurement is always obtained by some type of sensor that converts the real-world quantity into a numerical value.
- r is the reference, or desired output. This is specified by the user of the system. For the cruise control example, it is the setpoint speed, set by the driver.
- e is the error, simply defined as e = r - y. This is a measure of how far away from the desired value the system output actually is. In other words, it's the difference between what we want and what we have. For the cruise control example, it's the difference between the desired speed and the actual speed.
- u is the control command. This is the input to the plant and ultimately what makes the system move. The control command is implemented by some type of actuator. In the cruise control example, the actuator is the engine of the vehicle, and applying the control command (throttle) translates to forward thrust in the vehicle.
- C(s) is the controller. Notice that the input to the controller is the error, which contains information about the current state of the system, i.e. y. The controller takes the error signal e as an input and automatically computes an appropriate control command u. In the cruise control example, this means applying the correct amount of throttle to maintain the vehicle's speed at the desired setpoint.
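To make these roles concrete, here is a minimal numerical sketch of the cruise control loop. The first-order vehicle model, the PI control law, and all numbers are illustrative assumptions for this sketch, not values from the course material:

```python
# Sketch of the unity feedback loop for cruise control.
# Plant (assumed): m*dv/dt = u - b*v  (vehicle mass m, linear drag b).
# Controller (assumed): PI law, u = Kp*e + Ki*integral(e).
m, b = 1000.0, 50.0          # illustrative plant parameters
Kp, Ki = 800.0, 40.0         # illustrative controller gains
dt, r = 0.1, 25.0            # time step [s], reference speed [m/s]

v = 0.0                      # y: measured speed (assume a perfect sensor)
integral = 0.0               # accumulated error for the I term
for _ in range(2000):        # simulate 200 s
    e = r - v                # error: reference minus measurement
    integral += e * dt
    u = Kp * e + Ki * integral   # control command (throttle force)
    v += dt * (u - b * v) / m    # plant update (forward Euler)

print(round(v, 2))           # speed settles near the 25 m/s setpoint
```

Each line of the loop corresponds to one block of Fig. 1.1: form the error e, pass it through C(s), apply u to the plant, and measure y again.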
You may have noticed that the descriptions above for each element of the unity feedback control diagram were pretty generalized. This was intentional because...
KEY POINT: The very same unity feedback control diagram shown in Fig. 1.1 applies to many, many, many different systems.
This point is one of the core concepts of this chapter. That is, the block diagram in Fig. 1.1 doesn't just apply to cruise control; it applies to a vast range of real-world mechanical and electrical systems. Consider the hard disk drive in Table 1.1. The feedback control structure is applied to this system, with the plant being the rotational hard disk and the reference signal being some desired angular velocity. The actual rotational velocity must be as close as possible to the reference for proper function. It is the controller's job to constantly sample the error and decide whether to apply more or less voltage to the actuator (in this system, a motor).
For the Segway Scooter example, the controller uses the error between the reference angle and actual measured angle to decide how to apply voltage to the motors (actuators) which drive the wheels. The overall goal for this system is to keep the actual angle as close to zero as possible; this implies the system is balancing.
A third example is a heating, ventilation, and air-conditioning (HVAC) system. Most modern buildings are equipped with some form of HVAC system, and virtually all of them are controlled by a feedback control system as in Fig. 1.1. For this system, the goal is to maintain a desired temperature in a given space by continually comparing the actual temperature to the desired temperature (the error) and actuating the system accordingly. Typically, you can either fix the temperature of the supply air and vary its flow rate into the space, or fix the flow rate and vary the air temperature. Regardless of how the system is actuated, the objective and control structure are the same.
The one thing common to all of these vastly different systems is the structure of the control system. That is, every system constructs an error signal by feeding back the true measurement of the output and comparing it to the desired value. This error contains information about the current state of the system, so the controller can subsequently make an intelligent decision on how to best drive the plant. We will spend the majority of this course learning how to design the controller itself, i.e. what to put in C(s), but for now, the conceptual understanding of the feedback control structure is critical.
KEY POINT: The error signal e(t) contains information about the current state of the system (via the feedback path y(t)). This information is passed into the controller C(s), which intelligently decides how to best drive the plant to achieve the overall goal. Then the cycle starts all over again.
Open-loop vs. Closed-loop (feedback) control:
Picture one more example to which the unity-feedback control structure applies. Imagine balancing a long wooden dowel vertically on the palm of your hand (see Fig. 1.2).
Try to relate this bio-mechanical system to the unity-feedback control diagram in Fig. 1.1 by answering the question below:
Identify the basic elements of the unity feedback control structure for the control system in Fig. 1.2. The objective of this system is to balance the stick vertically on your hand.
Elements to match:
- The plant, P(s)
- The measurement, y(t)
- The sensor required to obtain the measurement
- The reference signal, r(t)
- The error signal, e(t)
- The controller, C(s)
- The actuator which will drive the system to move based on the control command from the brain

Candidate answers (not every candidate is used):
- The person's arm, hand, and muscles
- The difference between the actual and desired angle, r(t) - y(t)
- The person's ears
- The vertically oriented stick in a gravitational field
- The desired angle of the stick from the vertical
- The person's eyes
- The length of the stick
- The actual angle of the stick from the vertical
- The person's brain
Now that you have thought about how this system can be put into a feedback control framework, try this with any stick you have lying around. If you can get it to balance, great; you've stabilized an unstable system and you can consider it a closed-loop system.
Now, while you've got the stick balanced, try closing your eyes.
If you're like 99% of the population, the stick fell down. Let's see how this correlates to the block diagram.
When you closed your eyes, you removed the sensor from the system. Without a sensor, the measurement y(t) cannot be fed back, and the error signal no longer equals r(t) - y(t); in fact, it just equals r(t) (see Fig. 1.3). This means that the controller (your brain in this case) has no information about the current state of the system (is the stick balanced?) and cannot intelligently decide how to actuate the system (how should you move your hand?). In this scenario, the system is said to be running open-loop. That is, there is no feedback path.
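This failure can also be seen numerically. Below is a minimal open-loop sketch using a linearized toy model of the balancing stick, theta'' = (g/L)*theta; the model and its parameters are illustrative assumptions, not a derivation from the text:

```python
# Sketch: inverted stick linearized about vertical, theta'' = (g/L)*theta.
# With the loop open (eyes closed, no control input), any tiny tilt grows.
g, L = 9.81, 1.0             # gravity [m/s^2], assumed stick length [m]
dt = 0.001                   # time step [s]
theta, omega = 0.001, 0.0    # tiny initial tilt [rad], angular rate

for _ in range(2000):        # simulate 2 s with zero control input
    alpha = (g / L) * theta  # angular acceleration (no corrective torque)
    omega += dt * alpha
    theta += dt * omega

print(theta)                 # the 0.001 rad tilt has grown by orders of magnitude
```

With no feedback, the small initial tilt grows exponentially and the stick falls, exactly as in the eyes-closed experiment.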
Now that we have a sense for what open-loop and closed-loop systems are, let's discuss some of the pros and cons of each:
As we saw in the stick balancing example, in an open-loop system there is no feedback path. This means that, in order to do any meaningful control design, we must know the plant model P(s) exactly. Often, in open-loop systems, the best method is to use a lookup table to decide how to drive the system; remember, there is no feedback path. The following lists some key facts about open-loop systems:
- If the plant P(s) is unstable to begin with, then open-loop control will never be able to stabilize the system.
- Guessing the control command u(t) is really the only option. Often an intelligent guess is made via a lookup table in systems where the plant is stable and relatively well-known.
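As a numerical sketch of the lookup-table idea (and its weakness), suppose we use the same kind of first-order vehicle model as in the cruise control example; the model, the nominal drag value, and the mismatch are all illustrative assumptions:

```python
# Open-loop "lookup table" control: choose u from the nominal model only.
# Plant (assumed): m*dv/dt = u - b*v, so the steady-state speed is u/b.
m, r = 1000.0, 25.0          # mass [kg], desired speed [m/s]
b_nominal = 50.0             # drag value assumed when building the lookup table
b_true = 60.0                # actual drag (e.g. headwind, worn tires)

u = b_nominal * r            # lookup command that WOULD give 25 m/s if b were 50
v, dt = 0.0, 0.1
for _ in range(5000):        # simulate 500 s; no feedback, no correction
    v += dt * (u - b_true * v) / m

print(round(v, 2))           # settles at u/b_true = 20.83 m/s, not 25 m/s
```

Because there is no feedback path, the 20% model error translates directly into a permanent speed error that nothing in the system can detect or correct.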
Closed-loop systems, on the other hand, offer the engineer much more flexibility on the design of the system and are much more widely-used. The key benefit of a closed-loop system is that the controller C(s) always has information about the current state of the system, and can thus intelligently decide how to drive the system with an appropriate control command u(t). The following lists some key facts about closed-loop systems:
- A closed-loop system can stabilize an unstable plant P(s). We saw this with the stick balancing example. This is one of the primary advantages of using closed-loop control.
- Closing the loop around a plant can improve the performance of an already-stable system.
- The engineer can design the controller C(s) to emphasize different system performance characteristics, e.g. stability, tracking, and robustness.
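To illustrate the first point numerically, here is a sketch in which a simple proportional-derivative feedback law stabilizes a linearized toy model of the balancing stick, a plant that is unstable on its own; the model and gains are illustrative assumptions:

```python
# Sketch: closed-loop stabilization of an unstable plant.
# Plant (assumed): theta'' = (g/L)*theta + u  (u = torque per unit inertia).
# Controller (assumed): PD feedback, u = -Kp*theta - Kd*omega.
g, L = 9.81, 1.0             # gravity [m/s^2], assumed stick length [m]
Kp, Kd = 30.0, 10.0          # illustrative gains; need Kp > g/L for stability
dt = 0.001
theta, omega = 0.2, 0.0      # sizable initial tilt [rad]

for _ in range(5000):        # simulate 5 s
    u = -Kp * theta - Kd * omega   # feedback on the measured state
    alpha = (g / L) * theta + u
    omega += dt * alpha
    theta += dt * omega

print(abs(theta))            # the tilt has decayed back toward zero
```

The same plant that diverged under open-loop control is driven back to vertical once the measurement is fed back and acted on.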
The benefits of using closed-loop feedback control are clear, and we will be exclusively studying feedback systems for the duration of this course.
Concept check questions:
In a closed-loop feedback control system (as in Fig. 1.1), the input to the controller C(s) is:
r(t), the reference, or desired output signal
y(t), the measurement
r(t) - y(t), the difference between the desired output and the true output
u(t), the control command
What signal is actually applied to the real-life system and makes it move?
y(t), the measurement
e(t), the error
u(t), the control command
r(t), the reference signal
How is the measurement y(t) obtained in real life?
From the plant P(s)
From the controller C(s)
From a sensor attached to the system
How is the control command u(t) physically applied to the real system?
By the plant P(s)
By the controller C(s)
By an actuator
In the stick balancing example, if you close your eyes, what element of the control loop have you removed?
The controller C(s)
The plant P(s)
The sensor