Robot programming game: Java

The step function is executed in a loop so that the robot's state is continuously updated. The same concepts apply to the encoders. First, our robot will have a very simple model that makes many assumptions about the world, one of the important ones being that obstacles are never round. Although most of these assumptions are reasonable inside a house-like environment, round obstacles could be present. Our obstacle-avoidance software has a simple implementation: it follows the border of an obstacle in order to go around it.

We will give readers hints on how to improve our robot's control framework with an additional check for avoiding circular obstacles. Now let us enter the core of our control software and explain the behaviors that we want to program into the robot.

Additional behaviors can be added to this framework, and you should try your own ideas after you finish reading! A robot is a dynamic system.

The state of the robot, the readings of its sensors, and the effects of its control signals are in constant flux. Controlling the way events play out involves the following three steps:

1. Apply control signals.
2. Measure the results.
3. Generate new control signals calculated to bring us closer to our goal.

These steps are repeated over and over until we have achieved our goal. The more times we can do this per second, the finer control we will have over the system.

The Sobot Rimulator robot repeats these steps 20 times per second (20 Hz), but many robots must do this thousands or millions of times per second in order to have adequate control. Remember our previous introduction about different robot programming languages for different robotics systems and speed requirements. In general, each time our robot takes measurements with its sensors, it uses these measurements to update its internal estimate of the state of the world (for example, the distance from its goal).
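
To make the loop concrete, here is a minimal sketch of one such measure-estimate-actuate cycle. The robot and supervisor objects and all of their method names are hypothetical stand-ins for a hardware interface and the control logic, not the actual Sobot Rimulator API:

    import time

    CONTROL_RATE_HZ = 20        # iterate the control loop 20 times per second
    DT = 1.0 / CONTROL_RATE_HZ  # time between iterations

    def control_loop(robot, supervisor):
        # 'robot' and 'supervisor' are hypothetical objects; see note above.
        while not supervisor.goal_reached():
            readings = robot.read_proximity_sensors()    # 1. measure
            ticks = robot.read_wheel_encoders()
            supervisor.update_state(readings, ticks)     # 2. estimate state, compute error
            v_l, v_r = supervisor.compute_wheel_rates()  # 3. generate control signals
            robot.set_wheel_rates(v_l, v_r)
            time.sleep(DT)  # crude pacing; a real controller would use a fixed-rate timer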

It compares this state to a reference value of what it wants the state to be (for the distance to the goal, it wants that to be zero), and calculates the error between the desired state and the actual state. Once this information is known, generating new control signals can be reduced to a problem of minimizing the error, which will eventually move the robot toward the goal. To control the robot we want to program, we have to send a signal to the left wheel telling it how fast to turn, and a separate signal to the right wheel telling it how fast to turn. Let us call these signals vL and vR.

However, constantly thinking in terms of vL and vR is very cumbersome. Instead, it is more natural to ask how fast we want the robot to move forward (a velocity v) and how fast we want it to turn (an angular velocity omega). This is known as a unicycle model of control. The final transformation back into wheel speeds happens in supervisor.py; a sketch of the standard calculation follows.
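
This is the textbook differential-drive conversion. The function name and the parameters R (wheel radius) and L (distance between the wheel centers) are illustrative assumptions, not necessarily the names used in Sobot Rimulator:

    # Convert unicycle-model commands (v, omega) into per-wheel angular rates.
    # R = wheel radius in meters, L = distance between the wheels in meters.
    def uni_to_diff(v, omega, R, L):
        v_l = (2.0 * v - omega * L) / (2.0 * R)  # left wheel rate, rad/s
        v_r = (2.0 * v + omega * L) / (2.0 * R)  # right wheel rate, rad/s
        return v_l, v_r

Note that driving straight (omega = 0) yields equal wheel rates, while turning in place (v = 0) yields equal and opposite ones, matching the intuition behind the model.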

Using its sensors, the robot must try to estimate the state of the environment as well as its own state. These estimates will never be perfect, but they must be fairly good, because the robot will be basing all of its decisions on them.

Using its proximity sensors and wheel tickers alone, it must try to guess the following: the direction to obstacles, the distance from obstacles, the position of the robot, and the heading of the robot. The first two properties are determined by the proximity sensor readings and are fairly straightforward. We know ahead of time that the seventh reading, for example, corresponds to the sensor that points 75 degrees to the right of the robot.

Thus, if this value shows a reading corresponding to, say, 0.1 meters, we know that there is an obstacle 0.1 meters away in that direction. If there is no obstacle, the sensor will return a reading of its maximum range. Thus, if we read the maximum-range value on sensor seven, we will assume that there is actually no obstacle in that direction. Because of the way the infrared sensors work (measuring infrared reflection), the numbers they return are a non-linear transformation of the actual distance detected.

Thus, the Python function for determining the distance indicated must convert these readings into meters. This is done in supervisor.py. Again, we have a specific sensor model in this Python robot framework, while in the real world, sensors come with accompanying software that should provide similar conversion functions from non-linear values to meters. Determining the position and heading of the robot (together known as the pose in robotics programming) is somewhat more challenging.
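
As an illustration of what such a conversion can look like, here is a sketch that linearly interpolates over a calibration table. The table values below are invented for the example; a real sensor's data sheet provides the actual curve:

    # Convert a raw IR reading to meters by interpolating a calibration table.
    # Higher raw values mean stronger reflection, i.e. a closer obstacle.
    CALIBRATION = [  # (raw_reading, distance_m), sorted by raw reading, descending
        (3960, 0.02),
        (3100, 0.05),
        (2250, 0.10),
        (1200, 0.15),
        (340, 0.20),
    ]

    def reading_to_meters(raw):
        # Clamp to the ends of the table, then interpolate between rows.
        if raw >= CALIBRATION[0][0]:
            return CALIBRATION[0][1]
        if raw <= CALIBRATION[-1][0]:
            return CALIBRATION[-1][1]
        for (r_hi, d_lo), (r_lo, d_hi) in zip(CALIBRATION, CALIBRATION[1:]):
            if r_lo <= raw <= r_hi:
                t = (r_hi - raw) / (r_hi - r_lo)
                return d_lo + t * (d_hi - d_lo)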

Our robot uses odometry to estimate its pose. This is where the wheel tickers come in: by measuring how much each wheel has turned since the last iteration of the control loop, the robot can estimate how its pose has changed. This is one reason it is important to iterate the control loop very frequently in a real-world robot, where the motors moving the wheels may not be perfect. If we waited too long to measure the wheel tickers, both wheels could have turned quite a lot, and it would be impossible to estimate where we have ended up. Given our current software simulator, we can afford to run the odometry computation at 20 Hz, the same frequency as the controllers.

But it could be a good idea to have a separate Python thread running faster to catch smaller movements of the tickers. In our coordinate system, positive x is to the east and positive y is to the north, so a heading of 0 indicates that the robot is facing directly east. The robot always assumes its initial pose is (0, 0, 0). The full odometry function lives in supervisor.py.
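
Here is a sketch of the standard odometry update for a differential-drive robot. The parameter names are assumptions, and for brevity it takes tick deltas (counts accumulated since the previous iteration) rather than raw encoder totals:

    import math

    def update_odometry(pose, d_ticks_left, d_ticks_right,
                        ticks_per_rev, wheel_radius, wheel_base):
        # pose is the current estimate (x, y, theta).
        # Distance each wheel rolled since the last control-loop iteration.
        d_left = 2.0 * math.pi * wheel_radius * (d_ticks_left / ticks_per_rev)
        d_right = 2.0 * math.pi * wheel_radius * (d_ticks_right / ticks_per_rev)
        d_center = (d_left + d_right) / 2.0        # distance the midpoint moved
        d_theta = (d_right - d_left) / wheel_base  # change in heading
        x, y, theta = pose
        # Use the midpoint heading; a good approximation only when steps are
        # small, which is one more reason the loop must run frequently.
        mid = theta + d_theta / 2.0
        return (x + d_center * math.cos(mid),
                y + d_center * math.sin(mid),
                theta + d_theta)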

So how do we make the wheels turn to get the robot to its goal? If we momentarily assume there are no obstacles in the way, this becomes a simple task that can be easily programmed in Python: if we go forward while facing the goal, we will get there. Thanks to our odometry, we know what our current coordinates and heading are. We also know what the coordinates of the goal are, because they were pre-programmed. Subtracting our position from the goal position gives us the vector from our location to the goal; the angle of this vector from the X-axis is the heading we want to be on, and the difference between it and our current heading is our heading error.

In other words, it is the error between our current state and what we want our current state to be. We want to minimize this error, so we set our turning rate proportional to it: omega = kP * error. Here kP is a coefficient that determines how fast we turn in proportion to how far away from the goal heading we are facing. If the error in our heading is 0, then the turning rate is also 0. A good general rule of thumb is one you probably know instinctively: if we are not making a turn, we can go forward at full speed, and the faster we are turning, the more we should slow down.

This generally helps us keep our system stable and acting within the bounds of our model. A way to elaborate on this formula is to consider that we usually slow down when near the goal, in order to reach it with zero speed. How would this formula change? OK, we have almost completed a single control loop. The only thing left to do is transform these two unicycle-model parameters into differential wheel speeds and send the signals to the wheels; a sketch of the whole controller follows.
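
Putting the pieces together, here is a sketch of a go-to-goal controller in this style. The gain k_p, the cruise speed v_max, and the exponential slowdown rule are illustrative choices, not the tuned values from Sobot Rimulator:

    import math

    def go_to_goal_control(pose, goal, k_p=5.0, v_max=0.3):
        x, y, theta = pose
        # The heading we want: the angle of the vector from us to the goal.
        theta_goal = math.atan2(goal[1] - y, goal[0] - x)
        # Heading error, wrapped into (-pi, pi] so we always turn the short way.
        error = math.atan2(math.sin(theta_goal - theta),
                           math.cos(theta_goal - theta))
        omega = k_p * error                # turn in proportion to the error
        v = v_max * math.exp(-abs(error))  # the harder we turn, the slower we go
        return v, omega                    # then convert with uni_to_diff()

To also slow down near the goal, as suggested above, v could additionally be scaled by a function of the remaining distance, for example min(1.0, distance_to_goal / slowdown_radius), where slowdown_radius is another hypothetical tuning parameter.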

As we can see, the vector to the goal is an effective reference for us to base our control calculations on. Avoiding obstacles uses the same machinery with a different reference: when an obstacle is encountered, turn away from it until it is no longer in front of us.

Accordingly, when there is no obstacle in front of us, we want our reference vector to simply point forward. However, as soon as we detect an obstacle with our proximity sensors, we want the reference vector to point in whatever direction is away from the obstacle. A neat way to generate our desired reference vector is by turning our nine proximity readings into vectors, and taking a weighted sum.

When there are no obstacles detected, the vectors will sum symmetrically, resulting in a reference vector that points straight ahead as desired. But if a sensor on, say, the right side picks up an obstacle, it will contribute a smaller vector to the sum, and the result will be a reference vector that is shifted towards the left.
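
A sketch of that weighted sum is below. The sensor angles and the uniform weights are illustrative assumptions; in practice the angles come from the robot's actual sensor layout and the weights are tuned:

    import math

    # Angles of the nine proximity sensors relative to the robot's heading
    # (radians, positive = counterclockwise). These values are made up.
    SENSOR_ANGLES = [math.radians(a) for a in
                     (128, 75, 42, 13, -13, -42, -75, -128, 180)]
    SENSOR_WEIGHTS = [1.0] * len(SENSOR_ANGLES)

    def obstacle_avoidance_vector(distances_m):
        # Each reading becomes a vector along its sensor's direction with
        # length equal to the detected distance. A nearby obstacle therefore
        # contributes a SHORT vector, and the sum tilts away from it.
        ax = sum(w * d * math.cos(a) for w, d, a in
                 zip(SENSOR_WEIGHTS, distances_m, SENSOR_ANGLES))
        ay = sum(w * d * math.sin(a) for w, d, a in
                 zip(SENSOR_WEIGHTS, distances_m, SENSOR_ANGLES))
        return ax, ay  # steer toward the heading math.atan2(ay, ax)

With no obstacles in range, the symmetric left and right sensors cancel each other's sideways components, so the sum points straight ahead, exactly the behavior described above.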

The robot bounces around aimlessly, but it never collides with an obstacle, and even manages to navigate some very tight spaces.

A different kind of robot programming is Java's java.awt.Robot class (available since Java 1.3), which generates native system input events and is used to automate mouse and keyboard input, for example in GUI tests or simple game bots. What follows is a summary of parts of its API.

Robot(GraphicsDevice screen) - Creates a Robot for the given screen device.

Color getPixelColor(int x, int y) - Returns the color of a pixel at the given screen coordinates.

String toString() - Returns a string representation of this Robot. Other methods are inherited from class java.lang.Object.

Throws: AWTException - if the platform configuration does not allow low-level input control. This exception is always thrown when GraphicsEnvironment.isHeadless() returns true.

Coordinates passed to Robot method calls like mouseMove and createScreenCapture will be interpreted as being in the same coordinate system as the specified screen. Note that, depending on the platform configuration, multiple screens may either share the same coordinate system to form a combined virtual screen, or use different coordinate systems to act as independent screens. This constructor is meant for the latter case.

If screen devices are reconfigured such that the coordinate system is affected, the behavior of existing Robot objects is undefined. Parameters: screen - A screen GraphicsDevice indicating the coordinate system the Robot will operate in.

Throws: IllegalArgumentException - if screen is not a screen GraphicsDevice.

mouseMove(int x, int y) - Moves the mouse pointer to the given screen coordinates. Parameters: x - X position; y - Y position.

mousePress(int buttons) - Presses one or more mouse buttons. The mouse buttons should be released using the mouseRelease(int buttons) method. Parameters: buttons - the Button mask; a combination of one or more mouse button masks.

It is allowed to use only a combination of valid values as a buttons parameter. A valid combination consists of InputEvent.BUTTON1_DOWN_MASK, InputEvent.BUTTON2_DOWN_MASK, InputEvent.BUTTON3_DOWN_MASK and values returned by InputEvent.getMaskForButton(button). The valid combination also depends on the value of Toolkit.areExtraMouseButtonsEnabled().
