Channel: Library Questions - Processing 2.x and 3.x Forum

Image Processing using Processing


I am currently working on a project in Processing and need to do image processing in it. I initially thought of using OpenCV, but unfortunately I found that OpenCV for Processing does not wrap the complete original library. How can I get started with image processing in Processing? Since Processing is a wrapper around Java, Java code is accepted. Can I use JavaCV inside Processing? If so, how?
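Even without OpenCV or JavaCV, a lot of basic image processing can be done directly on a PImage's pixels[] array, which is just packed ARGB ints. As a minimal illustration (the class and method names below are my own, not from any library), grayscale conversion is plain per-pixel arithmetic:

```java
public class Grayscale {
    // Convert one packed ARGB pixel to grayscale using the common
    // Rec. 601 luminance weights, preserving the alpha channel.
    public static int toGray(int argb) {
        int a = (argb >> 24) & 0xFF;
        int r = (argb >> 16) & 0xFF;
        int g = (argb >> 8) & 0xFF;
        int b = argb & 0xFF;
        int y = (int) Math.round(0.299 * r + 0.587 * g + 0.114 * b);
        return (a << 24) | (y << 16) | (y << 8) | y;
    }

    // In a sketch: img.loadPixels(); grayscaleInPlace(img.pixels); img.updatePixels();
    public static void grayscaleInPlace(int[] pixels) {
        for (int i = 0; i < pixels.length; i++) {
            pixels[i] = toGray(pixels[i]);
        }
    }
}
```

For heavier algorithms, JavaCV should work too in principle: any Java library whose jars you drop into the sketch's `code` folder can be imported, though its API is closer to C++ OpenCV than to idiomatic Processing.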


How can I track an image and play a video for a particular tracked image?


I have tried the JMyron library, but in Processing 3 it shows the error "Lib is not installed properly".

I guess it only works on older versions of Processing.

So how can I track different images and play a different video for each tracked image? For example, if I show one image to the webcam, the first video should play for that image.

But I don't know how to track an image from the webcam in the latest Processing 3. Can anyone guide me here?

Thanks.

How to import javax.media.opengl in Processing 3.0?


Hey!

I'm trying to run the MSAFluid example "MSAFluidDemo" in Processing 3, and it gives me the error:

The package "javax.media.opengl" does not exist. You might be missing a library.

The same example works fine in Processing 2.2.1 on the same computer:
MacBook Pro OS X 10.11.6

Could not run the sketch (Target VM failed to initialize) error


Hi all, I'm new to programming and to Processing. I have to do a school project, but I'm having some trouble on my Mac.

This code (given to me by my teacher as an example) causes an error, and other example sketches do the same.

// SOUNDS
import processing.sound.*;

// SOUND VARIABLES
SoundFile cinguettio;

// ANIMATION VARIABLES
int anim = 0;

void setup() {
  size(160, 90);
  // frameRate(25);
  smooth();
  noStroke();
}

void draw() {
  anim = anim + 1;
  myVis();
}

void myVis() {
  cinguettio = new SoundFile(this, "cinguettio.mp3");

  PImage cava = loadImage("CaldanaGavorranoQuarry1_256.jpg");
  // cava.resize(500, 0);
  imageMode(CENTER);
  image(cava, width/2, height/2);

  PFont myFont = loadFont("SansSerif-12.vlw");
  textFont(myFont);
  textAlign(CENTER);
  fill(255);
  text("I prati e le siepi intorno alla cava ...", width/2 + anim, height/2);

  cinguettio.amp(0.02);
  cinguettio.play();
}

The error is:

RtApiCore::getDefaultInputDevice: No default device found!

libc++abi.dylib: terminating with uncaught exception of type std::runtime_error: RtApiCore::probeDeviceOpen: the device (0) does not support the requested channel count. Could not run the sketch (Target VM failed to initialize). For more information, read revisions.txt and Help → Troubleshooting.

Can someone help me?

ControlP5 / DropdownList: select item programmatically


I don't see any way to set a default selection for a ControlP5 DropdownList. In older posts (which I can't find anymore) I stumbled over setIndex() and activate(), both of which are not available in the current version. There is setValue(), but it doesn't seem to do anything... anything I missed?

g4p ok switch with text under


It must be very simple, but I cannot find this option... I use the GUI Builder to make a simple button. I want to put the text under the picture, but I could not find how :-( Thank you. David.

How do I check bit combinations in .mp3 and .flac files?


I'm doing some research, and I'm particularly interested in how bit combinations work in .mp3 and .flac files. Can I find common bit combinations in .mp3 files? Can I do that with .flac files? Can I compare the two? That kind of thing.

In the experiment I will be checking 4 playlists of different artists and genres. Each playlist will be provided in both .mp3 and .flac format. The songs in each playlist will be decoded and the number of occurrences of each bit combination collected. Because the number of songs per playlist and their bitrates vary (at least I assume so), I will divide the collected counts first by the number of songs and then by the bitrate. All of this will be repeated for the other playlists. The results should then show whether common bit combinations exist and whether the .mp3 and .flac formats share them.

Can anyone tell me what I can use in Processing to find this out, or suggest any alternatives?
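Processing can hand you raw file bytes (loadBytes() returns a byte[]), and the counting step itself is plain Java. A minimal sketch of counting single-byte occurrences (class and method names here are illustrative, not from any library):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class ByteHistogram {
    // Count how often each of the 256 possible byte values occurs.
    static long[] histogram(byte[] data) {
        long[] counts = new long[256];
        for (byte b : data) {
            counts[b & 0xFF]++; // mask to get an unsigned index 0..255
        }
        return counts;
    }

    public static void main(String[] args) throws IOException {
        byte[] data = Files.readAllBytes(Paths.get(args[0]));
        long[] counts = histogram(data);
        for (int i = 0; i < counts.length; i++) {
            if (counts[i] > 0) System.out.println(i + "\t" + counts[i]);
        }
    }
}
```

Dividing each count by the total number of bytes gives frequencies comparable across files of different lengths; for multi-byte "combinations", slide a window of n bytes over the array and count the windows in a HashMap instead.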

MidiBus: send MIDI message


I am trying to send a CC message or a note to another application, but I'm failing :( Using Vezer to send to Daslight4 works fine, so I want to do the same thing from my sketch instead of from Vezer.

Here is my (not yet working) attempt:

import themidibus.*;

MidiBus bus;

void setup() {
  bus = new MidiBus(this, 0, "Stuff");
  bus.addOutput("Stuff"); // likely redundant: "Stuff" is already the output in the constructor
}

void draw() {
  int channel = 0;
  int number = 23;
  int value = (int) random(127);
  bus.sendControllerChange(channel, number, value);
}
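Independent of the connection problem, note that draw() runs at roughly 60 fps, so this sketch floods the port with random CC values every frame. A small change-gate (plain Java; the class name is mine) sends only when the value actually changes:

```java
public class CcGate {
    private int last = -1; // no value sent yet

    // Returns true when the new value should be sent, i.e. it differs
    // from the last value that passed through the gate.
    public boolean shouldSend(int value) {
        if (value == last) return false;
        last = value;
        return true;
    }
}
```

In draw(), you would wrap the send: `if (gate.shouldSend(value)) bus.sendControllerChange(channel, number, value);`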

trouble with an avatar playing music


Hi! I'm new to Processing, well, to programming in general. I'm trying to make an avatar using the ttslib library, and I want it to play music using the Minim library, but I can't get the songs to pause. I'm leaving my code here; any help is appreciated. Thanks!

// MP3 PLAYER
void websocketOnMessage(WebSocketConnection con, String msg) {
  msg = msg.toLowerCase();

  // note the parentheses: without them, && binds tighter than ||
  if ((msg.contains("reproducir") || msg.contains("escuchar")) && msg.contains("música")
      && atencion == true) {
    reproducirmp3 = true;
    habla = true;
    tts.speak("which number");
    habla = false;
  }
  if (reproducirmp3 == true) {
    if (msg.contains("1")) {
      // load the track
      player1 = minim.loadFile("track_1.mp3");
      habla = true;
      tts.speak("playing track one");
      habla = false;
      player1.play();
    }
    if (msg.contains("2")) {
      player2 = minim.loadFile("track_2.mp3");
      habla = true;
      tts.speak("playing track two");
      habla = false;
      player2.play();
    }
    if (msg.contains("3")) {
      player3 = minim.loadFile("track_3.mp3");
      habla = true;
      tts.speak("playing track three");
      habla = false;
      player3.play();
    }
    if (msg.contains("4")) {
      player4 = minim.loadFile("track_4.mp3");
      habla = true;
      tts.speak("playing track four");
      habla = false;
      player4.play();
    }
    if (msg.contains("5")) {
      player5 = minim.loadFile("track_5.mp3");
      habla = true;
      tts.speak("playing track five");
      habla = false;
      player5.play();
    }
  }
  if (msg.contains("parar") && msg.contains("música") && reproducirmp3 == true && atencion == true) {
    reproducirmp3 = false;
    // guard against players that were never loaded: calling pause() on
    // a null AudioPlayer throws a NullPointerException
    if (player1 != null) player1.pause();
    if (player2 != null) player2.pause();
    if (player3 != null) player3.pause();
    if (player4 != null) player4.pause();
    if (player5 != null) player5.pause();
  }
} // this closing brace was missing in the original

Please help me find the problem with this code (LoZ game)


I am trying to make a LoZ game in Processing. However, whenever I try moving left (using the 'a' key), the character doesn't stop moving without another input. It's very annoying, and no one I know can see the problem. Can someone take a look and help me out? Here is the code. No other key has this problem; I even changed the walk-left key from 'a' to other keys, but it doesn't help.

import processing.sound.*;

SoundFile bitmusic;
PImage walkRight, walkDown, attack, walkLeft, walkUp;
int counter;
int x = 0;
int y = 0;
int dx = 0;
int dy = 0;
boolean isWalkingRight = false;
boolean isWalkingDown = false;
boolean isAttacking = false;
boolean isWalkingLeft = false;
boolean isWalkingUp = false;

void setup() {
  frameRate(30);
  bitmusic = new SoundFile(this, "windfish.mp3");
  bitmusic.play();
  loop();
  walkRight = loadImage("linkwalking.gif");
  walkDown = loadImage("fowardwalking.gif");
  walkLeft = loadImage("walkleft.png");
  attack = loadImage("attack.png");
  walkUp = loadImage("walkback.png");
  size(2000, 1000);
  noStroke();
}

void draw() {
  background(150, 260, 0);
  dx = 0;
  dy = 0;
  pushMatrix();
  if (keyPressed) {
    if (key == 'd') { dx = dx + 8; if (!isWalkingRight) { image(walkRight, x, y); isWalkingRight = true; } }
    if (key == 'w') { dy = dy - 8; if (!isWalkingUp)    { image(walkUp, x, y);    isWalkingUp = true; } }
    if (key == 's') { dy = dy + 8; if (!isWalkingDown)  { image(walkDown, x, y);  isWalkingDown = true; } }
  } // NOTE: this brace closes if(keyPressed), so the 'a' and 'o' checks below
  // run every frame with the stale value of 'key'; since 'key' keeps its last
  // value after release, the character keeps walking left after letting go of 'a'
  if (key == 'a') { dx = dx - 8; if (!isWalkingLeft) { image(walkLeft, x, y); isWalkingLeft = true; } }
  if (key == 'o') { dy = dy - 0; if (!isAttacking)   { image(attack, x, y);   isAttacking = true; } }
  isWalkingUp = false;    x = x + dx; y = y + dy;
  isAttacking = false;    x = x + dx; y = y + dy;
  isWalkingRight = false; x = x + dx; y = y + dy;
  isWalkingLeft = false;  x = x + dx; y = y + dy;
  isWalkingDown = false;  x = x + dx; y = y + dy;

  if (!keyPressed) {
    translate(x, y);
    fill(#C16915);
    rect(90, 130, 30, 20); rect(110, 120, 10, 10); rect(10, 70, 60, 60); rect(80, 110, 20, 10);
    rect(90, 100, 30, 10); rect(80, 90, 20, 10); rect(70, 80, 10, 10); rect(50, 10, 60, 20);
    rect(40, 20, 10, 10); rect(120, 90, 10, 10);
    fill(#E0AE7F);
    rect(20, 130, 50, 10); rect(70, 90, 10, 40); rect(30, 80, 10, 40); rect(20, 90, 30, 10);
    rect(30, 50, 100, 10); rect(20, 30, 120, 20); rect(50, 60, 60, 10); rect(70, 70, 30, 10);
    rect(20, 10, 10, 20); rect(130, 10, 10, 20); rect(20, 50, 10, 10); rect(130, 70, 40, 30);
    // dark blue sword
    fill(#10C41D);
    rect(80, 120, 30, 10); rect(100, 110, 30, 10); rect(120, 100, 10, 10); rect(80, 100, 10, 10);
    rect(80, 80, 50, 10); rect(100, 90, 20, 10); rect(100, 70, 30, 10); rect(110, 60, 10, 10);
    rect(40, -10, 80, 20); rect(40, 10, 10, 10); rect(110, 10, 10, 10); rect(60, 30, 10, 10);
    rect(90, 30, 10, 10); rect(30, 20, 20, 20); rect(10, 60, 10, 10); rect(20, 50, 20, 10);
    rect(20, 60, 10, 10); rect(30, 60, 10, 10); rect(50, -20, 60, 10);
    fill(#C16915);
    rect(40, 20, 10, 30); rect(110, 20, 10, 30); rect(90, 40, 10, 10); rect(60, 40, 10, 10);
    rect(70, 60, 20, 10); rect(120, 60, 20, 10); rect(130, 50, 10, 10);
    fill(#E0AE7F);
    rect(30, 50, 10, 10);
  }
  popMatrix();
}
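The usual fix in Processing is to track each key's state in keyPressed()/keyReleased() handlers instead of polling `key`, which only remembers the most recent keystroke. The state tracking itself is plain Java and can be sketched independently of Processing (the KeyState class and the 8-pixel step are illustrative, matching the sketch above):

```java
import java.util.HashSet;
import java.util.Set;

// Minimal key-state tracker: call press()/release() from the input
// callbacks, then query the per-frame movement delta in draw().
public class KeyState {
    private final Set<Character> down = new HashSet<>();

    public void press(char k)     { down.add(k); }
    public void release(char k)   { down.remove(k); }
    public boolean isDown(char k) { return down.contains(k); }

    // Movement delta for this frame: only keys currently held contribute,
    // so releasing 'a' immediately stops leftward movement.
    public int[] delta() {
        int dx = 0, dy = 0;
        if (isDown('a')) dx -= 8;
        if (isDown('d')) dx += 8;
        if (isDown('w')) dy -= 8;
        if (isDown('s')) dy += 8;
        return new int[] { dx, dy };
    }
}
```

In a sketch you would forward Processing's keyPressed() to press(key) and keyReleased() to release(key), and add delta() to the position each frame.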

Peasycam Rotation Sensitivity


Hi there, I have a really straightforward question: does anyone know how to make PeasyCam rotations slower or less sensitive? I only found ways to change the zoom sensitivity.

Thank you so much in advance!

error with my execution (fisica NoSuchMethodError)


Hi guys, I need help please. I am currently working on a Monopoly game and I have everything set up, but I get this error message when I run the application:

Using this database: C:\Users\-\Desktop\Processing\-\data\-.db
Exception in thread "AWT-EventQueue-0" java.lang.NoSuchMethodError: org.jbox2d.dynamics.World.<init>(Lorg/jbox2d/collision/AABB;Lorg/jbox2d/common/Vec2;Z)V
    at fisica.FWorld.<init>(Unknown Source)
    at fisica.FWorld.<init>(Unknown Source)
    at sheergrace.setup(sheergrace.java:203)
    at processing.core.PApplet.handleDraw(PApplet.java:2281)
    at processing.opengl.PJOGL$PGLListener.display(PJOGL.java:799)
    at jogamp.opengl.GLDrawableHelper.displayImpl(GLDrawableHelper.java:590)
    at jogamp.opengl.GLDrawableHelper.display(GLDrawableHelper.java:574)
    at javax.media.opengl.awt.GLCanvas$9.run(GLCanvas.java:1218)
    at jogamp.opengl.GLDrawableHelper.invokeGLImpl(GLDrawableHelper.java:1036)
    at jogamp.opengl.GLDrawableHelper.invokeGL(GLDrawableHelper.java:911)
    at javax.media.opengl.awt.GLCanvas$10.run(GLCanvas.java:1229)
    at javax.media.opengl.Threading.invoke(Threading.java:193)
    at javax.media.opengl.awt.GLCanvas.display(GLCanvas.java:492)
    at javax.media.opengl.awt.GLCanvas.paint(GLCanvas.java:546)
    at sun.awt.RepaintArea.paintComponent(Unknown Source)
    at sun.awt.RepaintArea.paint(Unknown Source)
    at sun.awt.windows.WComponentPeer.handleEvent(Unknown Source)
    at java.awt.Component.dispatchEventImpl(Unknown Source)
    at java.awt.Component.dispatchEvent(Unknown Source)
    at java.awt.EventQueue.dispatchEventImpl(Unknown Source)
    at java.awt.EventQueue.access$200(Unknown Source)
    at java.awt.EventQueue$3.run(Unknown Source)
    at java.awt.EventQueue$3.run(Unknown Source)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.security.ProtectionDomain$1.doIntersectionPrivilege(Unknown Source)
    at java.security.ProtectionDomain$1.doIntersectionPrivilege(Unknown Source)
    at java.awt.EventQueue$4.run(Unknown Source)
    at java.awt.EventQueue$4.run(Unknown Source)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.security.ProtectionDomain$1.doIntersectionPrivilege(Unknown Source)
    at java.awt.EventQueue.dispatchEvent(Unknown Source)
    at java.awt.EventDispatchThread.pumpOneEventForFilters(Unknown Source)
    at java.awt.EventDispatchThread.pumpEventsForFilter(Unknown Source)
    at java.awt.EventDispatchThread.pumpEventsForHierarchy(Unknown Source)
    at java.awt.EventDispatchThread.pumpEvents(Unknown Source)
    at java.awt.EventDispatchThread.pumpEvents(Unknown Source)
    at java.awt.EventDispatchThread.run(Unknown Source)
Already called beginDraw()
Using this database: C:\Users\-\Desktop\Processing\-\data\-.db
TuioClient: failed to connect to port 3333
java.lang.NullPointerException
    at processing.mode.java.runner.Runner.findException(Runner.java:926)
    at processing.mode.java.runner.Runner.reportException(Runner.java:871)
    at processing.mode.java.runner.Runner.exceptionEvent(Runner.java:797)
    at processing.mode.java.runner.Runner$2.run(Runner.java:686)

HE_mesh Wireframe Modifier Problem


Hi, could somebody please help me?

When I use the wireframe modifier, the mesh gets messed up and torn. It doesn't even work in the example files or tutorials.

I'm using Processing 3.2.1.

Thank you

AR project : Saito OBJLoader + NyARToolkit


Hi !

I learned Processing (thanks to the wonderful Daniel Shiffman) a few days ago to help with a student project based on AR. First, I did the three examples in Amnon Owed's tutorial ( creativeapplications.net/processing/augmented-reality-with-processing-tutorial-processing/ ) and, after a lot of failed attempts, it finally worked.

Now I'm trying to do the same as the third example (the cube), but instead of using beginShape/endShape I want to load my own CAD model. The only thing I found is the Saito OBJLoader library, which lets me load a .obj file into Processing. But I always hit the same problem: "Material "MTLO" is not defined", each time the marker enters the camera's field of view (sorry about my English).

I did select a material for my model in my CAD software (PTC Creo Parametric) and saved it as a .obj file.

I'm sorry, I left the "mountain growth" code and the loops in, because the sketch doesn't work without them and I don't yet know how to change that.

    // Augmented Reality Dynamic Example by Amnon Owed (21/12/11)
    // Processing 1.5.1 + NyARToolkit 1.1.6 + GSVideo 1.0

    import java.io.*; // for the loadPatternFilenames() function
    import processing.opengl.*; // for OPENGL rendering
    import jp.nyatla.nyar4psg.*; // the NyARToolkit Processing library
    import codeanticode.gsvideo.*; // the GSVideo library
    import saito.objloader.*;

    // a central location is used for the camera_para.dat and pattern files, so you don't have to copy them to each individual sketch
    // Make sure to change both the camPara and the patternPath String to where the files are on YOUR computer
    // the full path to the camera_para.dat file
    String camPara = "C:/Users/layatlu/Documents/Processing/libraries/nyar4psg/data/camera_para.dat";
    // the full path to the .patt pattern files
    String patternPath = "C:/Users/layatlu/Documents/Processing/libraries/nyar4psg/patternMaker/examples/ARToolKit_Patterns";
    // the dimensions at which the AR will take place. with the current library 1280x720 is about the highest possible resolution.
    int arWidth = 800;
    int arHeight = 600;
    // the number of pattern markers (from the complete list of .patt files) that will be detected, here the first 10 from the list.
    int numMarkers = 10;

    // the resolution at which the mountains will be displayed
    int resX = 60;
    int resY = 60;
    // this is a 2 dimensional float array that all the displayed mountains use during their update-to-draw routine
    float[][] val = new float[resX][resY];

    GSCapture cam;
    MultiMarker nya;
    float[] scaler = new float[numMarkers];
    float[] noiseScale = new float[numMarkers];
    float[] mountainHeight = new float[numMarkers];
    float[] mountainGrowth = new float[numMarkers];

    OBJModel model;

    void setup() {
      size(1280, 720, P3D); // the sketch will resize correctly, so for example setting it to 1920 x 1080 will work as well
      cam = new GSCapture(this, 800, 600); // initializing the webcam capture at a specific resolution (correct/possible settings depends on YOUR webcam)
      cam.start(); // start capturing
      //model H
      model = new OBJModel(this, "h.obj");
      model.scale(80); //because model is too small
      model.translateToCenter();
      noStroke(); // turn off stroke for the rest of this sketch :-)
      // create a new MultiMarker at a specific resolution (arWidth x arHeight), with the default camera calibration and coordinate system
      nya = new MultiMarker(this, arWidth, arHeight, camPara, NyAR4PsgConfig.CONFIG_PSG);
      // set the delay after which a lost marker is no longer displayed. by default set to something higher, but here manually set to immediate.
      nya.setLostDelay(1);
      String[] patterns = loadPatternFilenames(patternPath);
      // for the selected number of markers, add the marker for detection
      // create an individual scale, noiseScale and maximum mountainHeight for that marker (= mountain)
      for (int i=0; i<numMarkers; i++) {
        nya.addARMarker(patternPath + "/" + patterns[i], 80);
        scaler[i] = random(0.8, 1.9); // scaled a little smaller or bigger
        // noiseScale[i] = random(0.02, 0.075); // the perlin noise scale to make it look nicely mountainy
        // mountainHeight[i] = random(75, 150); // the maximum height of a mountain
      }
    }

    void draw() {
      // if there is a cam image coming in...
      if (cam.available()== true) {
        cam.read(); // read the cam image
        background(0); // a background call is needed for correct display of the marker results
        image(cam, 0, 0, width, height); // display the image at the width and height of the sketch window
        // create a copy of the cam image at the resolution of the AR detection (otherwise nya.detect will throw an assertion error!)
        PImage cSmall = cam.get();
        cSmall.resize(arWidth, arHeight);
        nya.detect(cSmall); // detect markers in the image
        println(MultiMarker.VERSION);
        drawMountains(); // draw dynamically flowing mountains on the detected markers (3D)
      }
    }

    // this function draws correctly placed 3D 'mountains' on top of detected markers
    // while the mountains are displayed they grow (up to a certain point), while not displayed they return to the zero-state
    void drawMountains() {
      // set the AR perspective uniformly, this general point-of-view is the same for all markers
      nya.setARPerspective();
      // turn on some general lights (without lights it also looks pretty cool, try commenting it out!)
      //lights();
      // for all the markers...
      for (int i=0; i<numMarkers; i++) {  // if the mountainGrowth is higher than zero, decrease by 0.05 (return to the zero-state), then continue to the next marker
        if ((!nya.isExistMarker(i))) {
          if (mountainGrowth[i] > 0) {
            mountainGrowth[i] -= 0.05;
          }
          continue;
        }
        // the following code is only reached and run if the marker DOES EXIST
        // if the mountainGrowth is lower than 1, increase by 0.03
        if (mountainGrowth[i] < 1) {
          mountainGrowth[i] += 0.03;
        }
        // the double for loop below sets the values in the 2 dimensional float array for this mountain, based on it's noiseScale, mountainHeight and index (i).
        float xoff = 0.0;
        for (int x=0; x<resX; x++) {
          xoff += noiseScale[i];
          float yoff = 0;
          for (int y=0; y<resY; y++) {
            yoff += noiseScale[i];
            val[x][y] = noise(i*10+xoff+frameCount*0.05, yoff) * mountainHeight[i]; // this sets the value
            float distance = dist(x, y, resX/2, resY/2);
            distance = map(distance, 0, resX/2, 1, 0);
            if (distance < 0) {
              distance = -distance;
            } // this line causing the four corners to flap upwards (try commenting it out or setting it to zero)
            val[x][y] *= distance; // in the default case this makes the value approach zero towards the outer ends (try commenting it out to see the difference)
          }
        }
        PMatrix syst3D;

        // get the Matrix for this marker and use it (through setMatrix)
        syst3D = nya.getMarkerMatrix(i);
        setMatrix(syst3D);
        scale(1, -1); // turn things upside down to work intuitively for Processing users
        scale(scaler[i]); // scale the mountain by it's individual scaler
        translate(-resX/2, -resY/2); // translate to center the mountain on the marker
        // for the full resolution...
        for (int x=0; x<resX-1; x++) {
          for (int y=0; y<resY-1; y++) {
            // each face is a Shape with a fill color, together they make a colored mountain
            model.disableMaterial();
            fill(255, 0, 0);
            model.draw();
          }
        }
      }

      // reset to the default perspective
      perspective();
    }



    // this function loads .patt filenames into a list of Strings based on a full path to a directory (relies on java.io)
    String[] loadPatternFilenames(String patternPath) {
      File folder = new File(patternPath);
      FilenameFilter pattFilter = new FilenameFilter() {
        public boolean accept(File dir, String name) {
          return name.toLowerCase().endsWith(".patt");
        }
      };
      return folder.list(pattFilter);
    }
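An error like "Material "MTLO" is not defined" usually means the .obj references a material name (via a `usemtl` line) that the accompanying .mtl file doesn't define, or that the .mtl wasn't exported next to the .obj at all. Listing the material names an .obj actually asks for takes only a few lines of plain Java (the class name here is mine), which makes it easy to compare against the `newmtl` entries in the .mtl:

```java
import java.util.ArrayList;
import java.util.List;

public class ObjMaterials {
    // Collect the material names an .obj file references via "usemtl" lines.
    public static List<String> referencedMaterials(List<String> objLines) {
        List<String> names = new ArrayList<>();
        for (String line : objLines) {
            String t = line.trim();
            if (t.startsWith("usemtl ")) {
                names.add(t.substring("usemtl ".length()).trim());
            }
        }
        return names;
    }
}
```

Every name this returns should appear as a `newmtl` entry in the .mtl file sitting next to the .obj in the sketch's data folder.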

HE_Mesh Examples not working


Hi,

I downloaded the current hemesh library from http://www.wblut.com/he_mesh/ and some of the examples work, but most of the time it says the functions don't exist.

For example: HE_FaceIterator() doesn't exist (from Tut006_Mesh_Selection); WB_RBSpline doesn't exist (from spielerei/NurbsGrid); not one of the color demos in /core works; and so on...

Am I missing a library? Am I using the correct version? I read in another discussion that a lot of methods have changed. I'm using Processing 3.2.1 on Windows 7. I tried the stable build and the daily build.

thanks


Capture audio output from macOS?


Hello,

I'm sure this has been discussed before, but Google is having trouble because searching for "Processing" returns results about actually processing audio, rather than audio and Processing.

My quick question is: can I capture the system audio output using Processing, and where are the docs for whatever module I need to do that?

Thanks!

Hype Framework based Question. Can I reset my mouseX, mouseY when the mouse is not in the window?


Here is my code so far. What I'm trying to do is repel the dots away from the mouse, but when the mouse leaves the boundaries of the window I'd like the dots to settle back into their grid positions. Can this be done with an if/else statement, maybe?

import hype.*;
import hype.extended.layout.HGridLayout;
import hype.extended.behavior.HAttractor;

HDrawablePool pool;
HAttractor    ha;
HAttractor.HAttractionForce hf;


void setup() {
    size(1364,640);
    H.init(this).background(#242424);

    ha = new HAttractor()
        .repelMode()
        .addForce(320, 320, 200)
        .debugMode(false);
    hf = ha.getForce(0);

    pool = new HDrawablePool(1248);
    pool.autoAddToStage()
        .add(new HRect(5).rounding(100))
        .layout(new HGridLayout().startX(21).startY(21).spacing(26,26).cols(52))
        .onCreate(
             new HCallback() {
                public void run(Object obj) {
                    HDrawable d = (HDrawable) obj;
                    d.noStroke().fill(#000000).anchorAt(H.CENTER);

                    ha.addTarget(d, 8, 1f);

                }
            }
        )
        .requestAll()
    ;
}

void draw() {
    hf.loc(mouseX, mouseY);
    H.drawStage();
}
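One if/else-style approach: Processing's mouseX/mouseY simply freeze at their last value (typically on a window edge) once the cursor leaves, so you can treat "on the border" as "outside" and park the repel force far off-screen so the grid settles back. The bounds logic is plain Java (these class and method names are mine, not Hype API):

```java
public class MouseBounds {
    // True when (mx, my) lies strictly inside a w-by-h window.
    // An edge-inclusive test would still report "inside" after the
    // cursor leaves, because the last reported position sits on the edge.
    public static boolean inside(int mx, int my, int w, int h) {
        return mx > 0 && my > 0 && mx < w - 1 && my < h - 1;
    }

    // Where to place the repel force: follow the mouse while inside,
    // otherwise park it far off-screen so the attractor stops acting.
    public static int[] forceLocation(int mx, int my, int w, int h) {
        if (inside(mx, my, w, h)) return new int[] { mx, my };
        return new int[] { -10000, -10000 };
    }
}
```

In draw() you would then feed `hf.loc(...)` the result of forceLocation(mouseX, mouseY, width, height) instead of the raw mouse coordinates.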

ControlP5 Text Field - Integer filter does not allow minus key to enter negative numbers


Great library.

Please let me know if there is a workaround other than using a broader input filter.

G4P multi-windows fullscreen?


Hi all, I'm working with the G4P library to create a two-window sketch to run on a computer with two screens. My goal is to have one 'controls' window on one screen, and another 'display' window running fullscreen on the second display.

I have a semi-working sketch based on the G4P example. My question is how to get the GWindow to go fullscreen. I know how to do this in a one-window Processing sketch, just can't figure out how with the GWindow methods.

Here's the demo code I'm playing with at the moment. Any help is greatly appreciated!

/*
make a 2-window sketch, with controls in one that affect the other

*/

import g4p_controls.*;

GSlider sdr1, sdr2;
GWindow secondWin;

void settings(){
  //fullScreen(P3D, 2);
  size(600, 280);
}

void setup() {

  secondWin = GWindow.getWindow(this, "Window 2", 300, 300, 200, 200, P3D);
  secondWin.addDrawHandler(this, "windowDraw");

  //=============================================================
  // Simple default slider,
  // constructor is `Parent applet', the x, y position and length
  sdr1 = new GSlider(this, 20, 20, 260, 50, 10);
  // show          opaque  ticks value limits
  sdr1.setShowDecor(false, true, true, true);
  sdr1.setNbrTicks(5);
  sdr1.setLimits(100, 0, 200);
  //=============================================================
  // Slider with a custom skin, check the data folder to find
  // the `blue18px' folder which stores the used image files.
  sdr2 = new GSlider(this, 20, 80, 260, 50, 10);
  // show          opaque  ticks value limits
  sdr2.setShowDecor(false, true, false, true);
  // there are 3 types
  // GCustomSlider.DECIMAL  e.g.  0.002
  // GCustomSlider.EXPONENT e.g.  2E-3
  // GCustomSlider.INTEGER
  sdr2.setNumberFormat(G4P.DECIMAL, 3);
  sdr2.setLimits(100, 0, 200);
  sdr2.setShowValue(false);
}

void draw(){
  background(0);
  fill(255);
  //ellipse(sdr1.getValueF(),  sdr2.getValueF(), 10, 10);
}



public void windowDraw(PApplet appc, GWinData data) {
   appc.background(0);
   appc.fill(255);
   appc.ellipse(sdr1.getValueF(),  sdr2.getValueF(), 10, 10);
}

XBee API - NoClassDefFoundError: gnu/io/SerialPortEventListener


I've been trying to run sketches from the book "Building Wireless Sensor Networks" and also from the webpage

http://code.google.com/p/xbee-api/wiki/Processing

both of which use the xbee-api.

I've tried running them in Processing 2.1.1 and 1.5.1, but I keep having the same problem with the line

xbee.open("COM4", 9600);

breaking with the above error.

Has anyone any idea how to fix this?

I found one thread http://forum.processing.org/one/topic/classnotfoundexception-gnu-io-serialporteventlistener.html but the fix seems to be specific to macOS, whereas I'm running Windows 7 Professional.
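A NoClassDefFoundError for gnu/io/SerialPortEventListener generally means the jar providing the gnu.io package (the RXTX serial library) isn't on the sketch's classpath; the xbee-api jar alone isn't enough. You can confirm which class is actually missing with a plain-Java probe (illustrative, not part of xbee-api):

```java
public class ClassProbe {
    // Report whether a class can be loaded from the current classpath.
    public static boolean isAvailable(String className) {
        try {
            Class.forName(className);
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println("gnu.io.SerialPortEventListener available: "
            + isAvailable("gnu.io.SerialPortEventListener"));
    }
}
```

If the probe reports false from inside your sketch, the RXTX jar (and its matching native DLL on Windows) needs to be added next to the xbee-api jar.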
