Channel: Library Questions - Processing 2.x and 3.x Forum

Panorama using a webcam


Hello, first time here!

I'm trying to stitch together frames from a webcam. Specifically, I want to build a mosaic (or panorama) image, as shown in these videos:

https://youtu.be/aXYnAfZ4bD4?t=15s

https://youtube.com/watch?v=QapSxGnUWtY&t

I don't need to correct for lens distortion, rotation, or perspective. I only need to translate the image in the XY plane, like the first video. Even moving only along X is OK for now.

So far I have been able to stitch two static images. But when I try to do the same with a live camera, something goes wrong :(
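For reference, the two-static-image case I got working boils down to pasting one image over the other at a known offset. A standalone sketch of that idea in plain Java (the `stitch` helper and the hard-coded offset are illustrative, not my actual Processing code):

```java
import java.awt.image.BufferedImage;

public class StitchDemo {
    // Stitch two images by a known horizontal offset (hypothetical helper):
    // draw the left image at (0,0), then the right image shifted by offsetX.
    static BufferedImage stitch(BufferedImage left, BufferedImage right, int offsetX) {
        int w = Math.max(left.getWidth(), offsetX + right.getWidth());
        int h = Math.max(left.getHeight(), right.getHeight());
        BufferedImage out = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
        out.getGraphics().drawImage(left, 0, 0, null);
        out.getGraphics().drawImage(right, offsetX, 0, null);
        return out;
    }

    public static void main(String[] args) {
        BufferedImage a = new BufferedImage(320, 240, BufferedImage.TYPE_INT_RGB);
        BufferedImage b = new BufferedImage(320, 240, BufferedImage.TYPE_INT_RGB);
        BufferedImage mosaic = stitch(a, b, 200);  // example offset of 200 px
        System.out.println(mosaic.getWidth() + "x" + mosaic.getHeight());  // 520x240
    }
}
```

The hard part, of course, is estimating that offset automatically from the camera frames, which is what the feature matching below tries to do.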

This is my code:

    import boofcv.processing.*;
    import boofcv.struct.image.*;
    import boofcv.struct.feature.*;
    import georegression.struct.point.*;
    import java.util.*;
    import processing.video.*;

    Capture video;

    PImage prevFrame;
    PImage current;

    int leu = 0;
    int c = 1;
    float avgX = 0;
    float avgY = 0;

    List<Point2D_F64> locations0, locations1;      // feature locations
    List<AssociatedIndex> matches;      // which features are matched together

    void setup() {

      size(600, 240);
      video = new Capture(this, 320, 240, 30);
      video.start();

      prevFrame = createImage(video.width, video.height, RGB);
      current = createImage(video.width, video.height, RGB);
    }
    // Detect SURF features in both frames and associate them
    void detectar() {
      SimpleDetectDescribePoint ddp = Boof.detectSurf(true, ImageDataType.F32);  //use SURF
      SimpleAssociateDescription assoc = Boof.associateGreedy(ddp, true);

      // Find the features
      ddp.process(prevFrame);
      locations0 = ddp.getLocations();
      List<TupleDesc> descs0 = ddp.getDescriptions();

      ddp.process(current);
      locations1 = ddp.getLocations();
      List<TupleDesc> descs1 = ddp.getDescriptions();

      // associate the points between the two frames
      assoc.associate(descs0, descs1);
      matches = assoc.getMatches();
    }

    void draw() {

      loadPixels();
      video.loadPixels();
      prevFrame.loadPixels();

      detectar();
      int count = 0;
      for ( AssociatedIndex i : matches ) {
        if ( count++ % 20 != 0 )
          continue;
        else if ( count > 100)
        {
          break;
        }

        Point2D_F64 p0 = locations0.get(i.src);
        Point2D_F64 p1 = locations1.get(i.dst);
        float diferencaX = abs((float) p0.x - (float)p1.x);
        float diferencaY = abs((float) p0.y - (float)p1.y);

        if (leu < 30) {

          if (diferencaY < 15) {
            avgX = avgX + (diferencaX - avgX)/c;
            avgY = avgY + (diferencaY - avgY)/c;
            c++;
          }
        }

        if ( leu == 29) {
          if (avgX > 300) {
            translacao();
          }
        }
        leu++;
      }
    }

    void translacao() {
      image( prevFrame, 0, 0 );
      image( current, avgX, -avgY);
      updatePixels();
      println(avgX);
      c = leu = 0;
      avgX = avgY = 0;
    }

    void captureEvent(Capture video) {
      // Save previous frame for motion detection!!
      // Before we read the new frame, we always save the previous frame for comparison!
      if (avgX > 300) {
        prevFrame.copy(video, 0, 0, video.width, video.height, 0, 0, video.width, video.height);
        prevFrame.updatePixels();
      }
      video.read();  // read the new frame from the camera
      current.copy(video, 0, 0, video.width, video.height, 0, 0, video.width, video.height);
      current.updatePixels();
    }
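In case it's unclear, the `avgX = avgX + (diferencaX - avgX)/c` update in `draw()` is meant to be an incremental (running) mean of the per-match displacements. Isolated from the sketch, with made-up displacement values, it behaves like this:

```java
public class RunningMean {
    // Incremental mean: the same update used for avgX/avgY in draw().
    // After processing all values, avg equals the arithmetic mean.
    static double runningMean(double[] values) {
        double avg = 0;
        int c = 1;
        for (double v : values) {
            avg = avg + (v - avg) / c;
            c++;
        }
        return avg;
    }

    public static void main(String[] args) {
        double[] dx = {10.0, 12.0, 14.0};  // hypothetical per-match X displacements
        System.out.println(runningMean(dx));  // prints 12.0
    }
}
```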

I appreciate any help! Thank you!

