Rigid Body Transformations
(Or How Different sensors see the same world)
F1/10th Racing
By Paritosh Kelkar
Mapping the surroundings
Specify a destination and generate a path to the goal
The colored cells represent a potential that is used to plan paths
RViz is used to specify different goals to the robot
Following the wall here blindly is going to be really hard: you would need a very complicated route definition file.
Simultaneous Localization and Planning
Map frame: where are you w.r.t. (with respect to) the map, i.e., your coordinates from the origin.
Does this tell you anything about where obstacles are in the map? Does this tell you anything about where we are in the map?
We must link frames together
Sensor frame
[Figure: sensor frame with X, Y, Z axes; distance measurements returned by the LIDAR]
The scan values from the LIDAR alone will not tell us how far away the obstacles are in the map; we must take care of the transformation between frames.
Note: the X, Y, Z axes of a frame of reference are mutually orthogonal (90°). X, Y, Z represent the axes along the three dimensions.
[Figure: map frame and car frame, each with X, Y, Z axes]
Between frames there will exist transformations that convert measurements from one frame to another
Important point: note what a transformation means w.r.t. its two frames (e.g., the laser frame).
[Figure: map frame, car frame, and laser frame with their axes]
There should exist a relationship between these frames:
Transform from map to car
Transform from car to laser
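The chain of transforms above can be sketched with homogeneous matrices: composing map→car with car→laser gives map→laser in one multiply. A minimal 2D sketch, with hypothetical poses (the car at (2, 1) rotated 90°, the laser mounted 0.3 m ahead of the car); the numbers are invented for illustration.

```python
import numpy as np

def transform_2d(theta, tx, ty):
    """Homogeneous 2D transform: rotation by theta, then translation (tx, ty)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, tx],
                     [s,  c, ty],
                     [0,  0,  1]])

# Hypothetical poses: car at (2, 1) in the map, rotated 90 degrees;
# laser mounted 0.3 m ahead of the car's origin, unrotated.
map_H_car   = transform_2d(np.pi / 2, 2.0, 1.0)
car_H_laser = transform_2d(0.0, 0.3, 0.0)

# Chaining the two gives the map -> laser transform directly.
map_H_laser = map_H_car @ car_H_laser

# A point 1 m straight ahead of the laser, in the laser frame...
p_laser = np.array([1.0, 0.0, 1.0])
# ...lands here in the map frame.
p_map = map_H_laser @ p_laser
```

The same pattern extends to 3D with 4x4 matrices; only the sizes change.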
Play-Doh: Obviously not a rigid body
The distance between any two given points of a rigid body remains constant in time regardless of external forces exerted on it.
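This definition is easy to check numerically: a rigid transformation (rotation plus translation) must leave the distance between any two points of the body unchanged. A small sketch with an arbitrary 30° rotation and offset (values chosen only for illustration):

```python
import numpy as np

# Rotate two points of a "body" 30 degrees about Z and translate them,
# then verify the distance between them is unchanged.
theta = np.radians(30)
R = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0,              0,             1]])
t = np.array([4.0, -2.0, 1.0])

a = np.array([0.0, 0.0, 0.0])   # two arbitrary points on the body
b = np.array([3.0, 4.0, 0.0])

dist_before = np.linalg.norm(b - a)                     # 5.0
dist_after  = np.linalg.norm((R @ b + t) - (R @ a + t))  # still 5.0
```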
[Figure: frames A and B, each with axes X, Y, Z; the origin of frame B is displaced from frame A's origin by d. A point p has coordinates ᴮp in frame B and ᴬp in frame A.]
Goal: represent the point in frame A, given its pose in frame B.
x_A = S11·x_B + S21·y_B + S31·z_B
y_A = S12·x_B + S22·y_B + S32·z_B
z_A = S13·x_B + S23·y_B + S33·z_B
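The three component equations above are exactly one matrix-vector product. A sketch, assuming the S_ij are the entries of a rotation matrix between the frames (here a hypothetical 30° rotation about Z); note the index pattern means the product uses the transpose of S:

```python
import numpy as np

# Hypothetical rotation matrix S: a 30-degree rotation about the Z axis.
th = np.radians(30)
S = np.array([[np.cos(th), -np.sin(th), 0],
              [np.sin(th),  np.cos(th), 0],
              [0,           0,          1]])

p_B = np.array([1.0, 2.0, 3.0])   # a point expressed in frame B

# Component form, term by term as in the equations above:
x_A = S[0, 0]*p_B[0] + S[1, 0]*p_B[1] + S[2, 0]*p_B[2]
y_A = S[0, 1]*p_B[0] + S[1, 1]*p_B[1] + S[2, 1]*p_B[2]
z_A = S[0, 2]*p_B[0] + S[1, 2]*p_B[1] + S[2, 2]*p_B[2]

# Matrix form: the same three sums are one product with S transposed.
p_A = S.T @ p_B
```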
The rotation matrix takes points expressed in frame B and represents them in frame A.
[Worked example: the point ᴮp = (0, 5, 0) in frame B is rotated into frame A; the entries of the rotation matrix are the cosine and sine components of the rotation angle.]
We now have the point p as referenced in frame A; the rotation matrix and ᴮp are both known.
Result: ᴬp = (-2.5, 4.3, 0)
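The numbers in this example are consistent with a 30° rotation about the Z axis (an assumption: the slide does not state the angle). A short sketch reproducing them:

```python
import numpy as np

# Assumed: a 30-degree rotation about Z relating frames B and A.
theta = np.radians(30)
R = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0,              0,             1]])

p_B = np.array([0.0, 5.0, 0.0])   # the point expressed in frame B
p_A = R @ p_B                      # roughly (-2.5, 4.33, 0) in frame A
```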
Special case: the origins of both frames are at the same location, so frames A and B differ only by a rotation.
General case: the origins are separated by a displacement d, so representing a point in frame A, given its pose in frame B, requires a rotation and a translation.
ᴬp = ᴬR_B · ᴮp + ᴬd_B   (rotate, then translate)

In homogeneous coordinates the two steps collapse into a single matrix:

[ᴬp]   [ ᴬR_B  ᴬd_B ] [ᴮp]
[ 1 ] = [ 0 0 0   1  ] [ 1 ]

H = the homogeneous transformation that transforms measurements in frame B to those in frame A.
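A minimal sketch of building and applying such an H, with a hypothetical pose (90° about Z, offset (1, 2, 0)); one matrix multiply performs the rotation and the translation at once:

```python
import numpy as np

def homogeneous(R, d):
    """Build the 4x4 matrix H = [[R, d], [0 0 0 1]] from rotation R and offset d."""
    H = np.eye(4)
    H[:3, :3] = R
    H[:3, 3] = d
    return H

# Hypothetical pose of frame B in frame A: 90 degrees about Z, offset (1, 2, 0).
th = np.pi / 2
R = np.array([[np.cos(th), -np.sin(th), 0],
              [np.sin(th),  np.cos(th), 0],
              [0,           0,          1]])
d = np.array([1.0, 2.0, 0.0])
A_H_B = homogeneous(R, d)

# A point in frame B, written with a trailing 1 (homogeneous coordinates).
p_B = np.array([1.0, 0.0, 0.0, 1.0])
p_A = A_H_B @ p_B   # rotated and translated in one multiply
```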
Map Frame
world_frame → map: the map frame is a child of the world frame; the world frame itself has no parent. Note: tf = transformer class.
A tf tree is a structure that maintains relations between the linked frames.
[Figure: pose estimates spreading out from the initial position; notice how the uncertainty increases]
Note that if the frame is connected in the tf tree, we can obtain a representation of that frame with any other frame in the tree Tf tree
world_frame map
base_link Tf tree
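The lookup along the tree amounts to composing the parent-to-child transforms on the path between two frames; in ROS the tf library does this for you. A toy sketch of the idea (frame names from the slide, the numeric offset is invented):

```python
import numpy as np

# Toy "tf tree": each child frame stores its parent and the 4x4
# transform parent_H_child.  (Real ROS tf maintains this for you.)
tree = {
    "map":       ("world_frame", np.eye(4)),
    "base_link": ("map",         np.array([[1, 0, 0, 2.0],
                                           [0, 1, 0, 0.5],
                                           [0, 0, 1, 0.0],
                                           [0, 0, 0, 1.0]])),
}

def lookup(frame):
    """Compose transforms up the tree, returning world_H_frame."""
    H = np.eye(4)
    while frame in tree:
        parent, parent_H_frame = tree[frame]
        H = parent_H_frame @ H
        frame = parent
    return H

world_H_base = lookup("base_link")   # world -> base_link, via map
```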
Map frame origin at (0, 0, 0). A new sensor reading gives us new information, and the position estimate jumps to (2, 0, 0), i.e., the estimate is not continuous.
Initial placement of the odom and map frames, and their final placement after the robot has moved some distance.