Hello everyone, I'm back again, this time trying to work out how to track an object in 3D space while also keeping track of my own position within that space. I'll break that down, because I know it sounds weird.
Let's say I'm using the live feed from a webcam, and the camera is facing a room (this room is the cube I mentioned in the main question). In this room there is a cylinder, or a flat ellipse, lying on what would be the ground, and this ellipse is spinning at a steady rate. I need Processing to keep track of the direction and spin of that actual object in front of the camera. On top of that, as we move about the room/cube, I want to keep track of where the spinning object is relative to the camera, so that at any time, if the camera turns back toward the direction where the object was, we still see the object, just from another perspective. Does that make sense?
Put another way: pretend we are that camera, and we walk into a festival and see a carousel spinning. As we walk through the festival, I know that if I look back toward where the carousel was, it will still be spinning in the direction I last saw it.
Here is what I think we could use to accomplish this task:
- Arduino
- IMU MPU6050 (this should help keep track of where the camera is, in case the camera alone is not enough to do the trick)
- USB Webcam
As far as the IMU goes, I believe I can send Processing the accel and gyro values from the IMU's axes over serial. We should then be able to set up variables that pair the IMU's current state with the current camera frame and its position (x, y, and z respectively). Basically: if the IMU currently reads these values, then those values belong to this frame; and if the gyro reading drifts from what it was on the first frame, it means we moved in that direction, so a simple loop can increment its own internal Cartesian coordinate system to keep track of where we are in space (kind of like how a watchdog timer works in Arduino programming).
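The incrementing idea above is basically dead reckoning: integrate the gyro's angular rate over each frame's time step to accumulate a heading, then step the position along that heading. Here is a minimal sketch of just that math, in plain Java so it runs anywhere; the `GyroTracker` class and the simulated input values are my own illustration, not real MPU6050 output (a real gyro drifts and would need calibration or a complementary filter with the accelerometer).

```java
// Minimal dead-reckoning sketch. GyroTracker and the sample
// values below are hypothetical, for illustration only.
public class GyroTracker {
    private float yawDeg = 0;   // accumulated heading, in degrees
    private float x = 0, y = 0; // position in our own internal Cartesian system

    // gzDegPerSec: gyro Z rate in deg/s, speed: assumed forward speed,
    // dt: seconds elapsed since the last frame
    public void update(float gzDegPerSec, float speed, float dt) {
        yawDeg += gzDegPerSec * dt;         // integrate angular rate -> angle
        double yawRad = Math.toRadians(yawDeg);
        x += speed * dt * Math.cos(yawRad); // dead-reckon position from heading
        y += speed * dt * Math.sin(yawRad);
    }

    public float yaw()  { return yawDeg; }
    public float posX() { return x; }
    public float posY() { return y; }

    public static void main(String[] args) {
        GyroTracker t = new GyroTracker();
        // simulate one second of turning at 90 deg/s, sampled at 100 Hz
        for (int i = 0; i < 100; i++) {
            t.update(90f, 0f, 0.01f);
        }
        System.out.println(t.yaw()); // roughly 90 degrees after one second
    }
}
```

In a Processing sketch, `dt` would come from `millis()` differences between frames, and `gzDegPerSec` from the serial port.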
OK, now for the webcam magic: I believe that if we use the following functions from these two programs I found, we can build the program I have in mind.
1)
import peasy.*; // PeasyCam

PeasyCam cam;

void setup() {
  size(800, 600, P3D);                    // screen size and 3D renderer
  cam = new PeasyCam(this, 0, 0, 0, 500); // PeasyCam initial lookAt (x, y, z, distance)
  cam.setMinimumDistance(200);            // minimum camera distance from subject, constrains zoom
  cam.setMaximumDistance(500);            // maximum camera distance from subject, constrains zoom
}

void draw() {
  background(0); // clear screen

  // draw a box just for visual reference in the 3D space
  noFill();
  stroke(150, 150, 150, 255); // box line colour
  strokeWeight(2);            // box line thickness
  box(300, 300, 300);         // box size (x, y, z)

  float[] rotations = cam.getRotations(); // camera rotations in model space (x, y, z)
  float[] lookedAt = cam.getLookAt();     // look-at coordinates in model space (x, y, z)
  float[] camPos = cam.getPosition();     // camera coordinates in model space (x, y, z)
  float distance = dist(lookedAt[0], lookedAt[1], lookedAt[2],
                        camPos[0], camPos[1], camPos[2]); // distance from camera to look-at point

  // find the position of the mouse in model space
  float[] mousePos = new float[3]; // mouse coordinates in model space (x, y, z)
  pushMatrix();
  rotateX(rotations[0]);
  rotateY(rotations[1]);
  rotateZ(rotations[2]);
  mousePos[0] = modelX(mouseX - width/2, mouseY - height/2, distance);
  mousePos[1] = modelY(mouseX - width/2, mouseY - height/2, distance);
  mousePos[2] = modelZ(mouseX - width/2, mouseY - height/2, distance);
  popMatrix();

  // vector from camera position to mouse position in model space
  PVector camToMouse = new PVector(mousePos[0] - camPos[0],
                                   mousePos[1] - camPos[1],
                                   mousePos[2] - camPos[2]);

  // vector from camera position to looked-at position in model space
  PVector camToLookedAt = new PVector(lookedAt[0] - camPos[0],
                                      lookedAt[1] - camPos[1],
                                      lookedAt[2] - camPos[2]);

  // line from origin to 'mirrored' mouse position
  line(0, 0, 0,
       mousePos[0] + 2*camToLookedAt.x,
       mousePos[1] + 2*camToLookedAt.y,
       mousePos[2] + 2*camToLookedAt.z);

  // print distance from mouse position to mirrored mouse position (just to check!)
  println(dist(mousePos[0], mousePos[1], mousePos[2],
               mousePos[0] + 2*camToLookedAt.x,
               mousePos[1] + 2*camToLookedAt.y,
               mousePos[2] + 2*camToLookedAt.z));
}
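To "remember" where the spinning object is as the camera moves, one approach is to store the object's fixed position in model space and recompute the bearing from the camera's current position each frame, e.g. with `atan2`. This is my own illustrative math, not something from the program above, and the names are hypothetical:

```java
// Illustrative only: given a fixed object position and a moving camera
// (both on the ground plane), compute the heading the camera would need
// to turn to in order to face the object again.
public class BearingDemo {
    // heading in degrees, 0 = along +X axis, counter-clockwise positive
    public static double bearingTo(double objX, double objZ,
                                   double camX, double camZ) {
        return Math.toDegrees(Math.atan2(objZ - camZ, objX - camX));
    }

    public static void main(String[] args) {
        // carousel fixed at (100, 0); camera starts at the origin...
        double b1 = bearingTo(100, 0, 0, 0);     // 0: object is dead ahead on +X
        // ...then walks to (100, 100); object is now off in the -Z direction
        double b2 = bearingTo(100, 0, 100, 100); // -90
        System.out.println(b1 + " " + b2);
    }
}
```

In the PeasyCam sketch, comparing this bearing with the camera yaw from `cam.getRotations()` tells you whether the object should currently be in view or behind you.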
2)
import shapes3d.*;
import shapes3d.animation.*;
import shapes3d.utils.*;
import processing.video.*;
import gab.opencv.*;

/*
 mix this with cam3D cube
 */

//Movie mov;
Capture mov;
OpenCV opencv;
Shape3D[] shapes;
Box box;
int shapesNumTot = 1;
float angleX, angleY, angleZ;

void setup() {
  size(1000, 750, P3D);
  mov = new Capture(this, 1280/2, 720/2, "Microsoft LifeCam Front");
  opencv = new OpenCV(this, 1280/2, 720/2);
  mov.start();
  // mov = new Movie(this, "Shots.mp4");
  // mov.loop();
  textureMode(NORMAL);
  shapes = new Shape3D[shapesNumTot];
  box = new Box(this);
  //box.setTexture(mov, Box.FRONT); // if I try this instead it locks up my PC
  box.setTexture(mov); // this works as expected
  box.setSize(200, 200, 200);
  box.drawMode(S3D.TEXTURE);
  shapes[0] = box;
}

void draw() {
  background(0);
  pushMatrix();
  camera(0, 0, 300, 0, 0, 0, 0, 1, 0);
  // camera(70.0, 35.0, 120.0, 50.0, 50.0, 0.0,
  //        0.0, 1.0, 0.0);
  angleX += radians(0.913f);
  angleY += radians(0.799f);
  angleZ += radians(1.213f);
  rotateX(angleX);
  rotateY(angleY);
  rotateZ(angleZ);
  for (int i = 0; i < shapesNumTot; i++) {
    shapes[i].draw();
  }
  popMatrix();
}

// only needed if you switch back to the Movie source
void movieEvent(Movie m) {
  m.read();
}

void captureEvent(Capture c) {
  c.read();
}
At this point I'm not sure which algorithms I should implement, or where to tie in the IMU variables coming over serial, but I do want to do something similar to how we would use mouseX and mouseY, except replacing those two with the variables corresponding to the IMU's X and Y axes. If anyone knows of an example close to the functions I need, that would be super!
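On the serial side, a common pattern is to have the Arduino print one comma-separated line per sample and split it in Processing. The splitting itself is plain Java, so here is a minimal sketch of that step; the `"ax,ay,az,gx,gy,gz"` line format is my own assumption, not a fixed protocol. The parsed gyro angles are exactly the values that would stand in for mouseX/mouseY:

```java
// Hypothetical line format from the Arduino: "ax,ay,az,gx,gy,gz"
public class ImuLineParser {
    // parse one serial line into six floats; returns null on a malformed line
    public static float[] parse(String line) {
        String[] parts = line.trim().split(",");
        if (parts.length != 6) return null;
        float[] v = new float[6];
        try {
            for (int i = 0; i < 6; i++) v[i] = Float.parseFloat(parts[i]);
        } catch (NumberFormatException e) {
            return null; // skip garbage (common right after opening the port)
        }
        return v;
    }

    public static void main(String[] args) {
        float[] v = parse("0.01,-0.02,0.98,1.5,-0.3,90.0");
        // v[3] and v[4] (gyro X/Y) could drive the sketch in place of
        // mouseX/mouseY, e.g. rotateX(radians(v[3])); rotateY(radians(v[4]));
        System.out.println(v[5]); // 90.0
    }
}
```

In Processing you would read the line with `Serial.readStringUntil('\n')` inside `serialEvent()`, feed it through a parser like this, and keep the last good sample in a field for `draw()` to use.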