You are given a robot as shown in Figure 1, with dimensions as indicated. We observe it from an inertial frame denoted I_{X/Y}. Let us assume the first dimension in Figure 1 is 2 m and d = 1 m. For each question, the robot starts at pose P = [4.0, 2.0, 30°]^T in the inertial frame. You set the robot's wheel velocities at the start of each question and keep them unchanged throughout. Finally, each wheel can be driven at between zero and one radian per second.
Question 1.1: Assume for this question that the robot can only drive both wheels at the same angular velocity. If we drive the robot as fast as we can, what is its position in the inertial frame after five seconds?
Question 1.2: We will now relax the condition that both wheels spin at the same angular velocity. What should the angular velocities of the individual wheels be set to in order to reach [10, 5.5]^T in the inertial frame? What is the robot's final orientation in the inertial frame?
Question 1.3: The objective for this question is to turn the robot to face the origin. What is the fastest way to drive the robot so that it faces the origin? How long does this take? What is the final pose of the robot in the inertial frame?
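As a sanity check for the three parts above, the standard differential-drive kinematics can be integrated numerically. This is only a sketch, not the assigned solution method: the wheel radius r and wheel separation L below are placeholders for the dimensions in Figure 1 (1 m and 2 m are assumed here), so substitute the actual values from the figure.

```python
import math

def diff_drive_step(pose, wl, wr, r, L, dt):
    """One Euler step of differential-drive kinematics.

    pose:   (x, y, theta) in the inertial frame
    wl, wr: left/right wheel angular velocities [rad/s]
    r:      wheel radius [m]      (assumed; take from Figure 1)
    L:      wheel separation [m]  (assumed; take from Figure 1)
    """
    x, y, th = pose
    v = r * (wl + wr) / 2.0   # forward speed of the robot centre
    w = r * (wr - wl) / L     # turn rate about the robot centre
    x += v * math.cos(th) * dt
    y += v * math.sin(th) * dt
    th += w * dt
    return (x, y, th)

# Question 1.1 sketch: both wheels at the maximum 1 rad/s, so the
# robot drives straight at v = r * 1 m/s for five seconds.
r, L = 1.0, 2.0                       # placeholder dimensions
pose = (4.0, 2.0, math.radians(30.0))
for _ in range(500):                  # 5 s at dt = 0.01 s
    pose = diff_drive_step(pose, 1.0, 1.0, r, L, dt=0.01)
```

With equal wheel speeds the turn rate is zero, so the integration is exact: the robot translates 5 m along its 30° heading.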
Question 2.1: For each of the following sensors, identify whether it is active or passive, and exteroceptive or proprioceptive:
- IR motion sensor
- Inertial measurement unit
- Bump sensor
- Push-button switch
Question 2.2: You are given a 1-D image I = [1, 2, 3, 2, 4, 8, 12, 16, 16, 11, 7, 5, 23, 8, 18, 12, 5, 12, 21, 16].
- Apply a uniform smoothing filter of size 3 to the image and write down the smoothed image. Don't worry about the boundaries.
- Now apply a local filter of size 3 that weights the current pixel twice as much as the previous and next pixels.
- In your own words, describe when you would use a uniform filter vs. a weighted filter.
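The two filters above can be checked with a short script. This is a sketch under one assumption: "don't worry about the boundaries" is read as valid-mode filtering (the output is two pixels shorter than the input), and the weighted filter is taken to be the kernel [1, 2, 1] normalised by 4.

```python
def smooth(img, kernel):
    """Valid-mode 1-D filtering with a small kernel.

    Boundaries are skipped, so the output has len(img) - len(kernel) + 1
    samples. The kernel is normalised by its sum.
    """
    k, s = len(kernel), sum(kernel)
    return [sum(img[i + j] * kernel[j] for j in range(k)) / s
            for i in range(len(img) - k + 1)]

I = [1, 2, 3, 2, 4, 8, 12, 16, 16, 11, 7, 5, 23, 8, 18, 12, 5, 12, 21, 16]

uniform  = smooth(I, [1, 1, 1])   # box filter: equal weights
weighted = smooth(I, [1, 2, 1])   # centre pixel weighted twice as much
```

For example, the first uniform output sample is (1 + 2 + 3) / 3 = 2.0, and the corresponding weighted sample is (1 + 2·2 + 3) / 4 = 2.0.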
Question 2.3: What are separable filters? How do they help computational efficiency?
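The efficiency claim in Question 2.3 can be demonstrated concretely: a separable 2-D kernel is the outer product of two 1-D kernels, so a full 2-D convolution can be replaced by a horizontal pass followed by a vertical pass. The sketch below uses an assumed [1, 2, 1]/4 kernel and a toy image; per output pixel it trades n² multiplies for roughly 2n.

```python
import numpy as np

# Separable kernel: outer product of a 1-D kernel with itself.
k1d = np.array([1.0, 2.0, 1.0]) / 4.0
k2d = np.outer(k1d, k1d)                     # 3x3 2-D kernel

img = np.arange(25, dtype=float).reshape(5, 5)   # toy 5x5 image

def conv2d_valid(img, k):
    """Direct (valid-mode) 2-D filtering: n*n multiplies per pixel."""
    h, w = k.shape
    out = np.zeros((img.shape[0] - h + 1, img.shape[1] - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + h, j:j + w] * k)
    return out

def conv_rows(img, k):
    """Valid-mode 1-D filtering along each row: n multiplies per pixel."""
    n = len(k)
    return np.array([[np.dot(row[j:j + n], k)
                      for j in range(len(row) - n + 1)] for row in img])

full = conv2d_valid(img, k2d)
sep  = conv_rows(conv_rows(img, k1d).T, k1d).T   # rows, then columns
```

Because k2d is exactly the outer product of k1d with itself, `full` and `sep` agree to floating-point precision.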
Question 2.4: What is a computationally efficient way of distinguishing uniform regions and edges from corners? Explain how it differentiates corners.
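One common answer to Question 2.4 is the Harris response computed from the 2x2 structure tensor: uniform regions have two small eigenvalues, edges one large and one small, and corners two large ones, and R = det(M) - k·trace(M)² separates these cases without an explicit eigendecomposition. The sketch below is a minimal, unoptimised illustration on a toy image (window size and k = 0.04 are conventional assumptions, not values from the course).

```python
import numpy as np

def harris_response(img, k=0.04, win=1):
    """Harris response R = det(M) - k*trace(M)^2 per pixel.

    M is the 2x2 structure tensor of image gradients, summed over a
    (2*win+1)^2 window. R >> 0 at corners, R < 0 at strong edges,
    and |R| ~ 0 in uniform regions.
    """
    img = img.astype(float)
    Iy, Ix = np.gradient(img)                 # central differences
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy
    H, W = img.shape
    R = np.zeros((H, W))
    for i in range(win, H - win):
        for j in range(win, W - win):
            sl = (slice(i - win, i + win + 1), slice(j - win, j + win + 1))
            a, b, c = Ixx[sl].sum(), Iyy[sl].sum(), Ixy[sl].sum()
            R[i, j] = (a * b - c * c) - k * (a + b) ** 2
    return R

# Toy image: a bright square on a dark background. Its corners should
# score higher than its edge midpoints, and flat areas score ~0.
img = np.zeros((10, 10))
img[3:7, 3:7] = 1.0
R = harris_response(img)
```

On this toy image, R at the square's corner (3, 3) exceeds R on the edge at (5, 3), and R is zero in the flat background far from the square.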
Question 2.5: We learned in class that our feature detector algorithm should ideally be robust to rotation, scaling, and change in viewer perspective. Write down two sentences on how the SIFT feature detector is robust to the above transforms.