Channel: Library Questions - Processing 2.x and 3.x Forum
Viewing all 2896 articles

Converting FFT Amplitude to dB/Hz


Hello, I'm trying to use the Sound library to create a spectrogram. I looked at the FFTSpectrogram example that comes with the library. My goal is to create a spectrogram that looks similar to the output of MATLAB's spectrogram function (image below):

[image: MATLAB spectrogram output]

The whole spectrogram is generated as one image; it does not update dynamically with the current audio. The y-axis is frequency (Hz), the x-axis is time (s), and the color axis is power/frequency (dB/Hz). The FFT class has a spectrum array, but I'm not sure how to convert those values to dB/Hz.
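For reference, the conversions themselves are just logarithms: dB = 20 * log10(amplitude / reference) for amplitude spectra, and MATLAB's dB/Hz comes from normalizing power by the width of each frequency bin (sampleRate / fftSize) before taking 10 * log10. A plain-Java sketch of the math (the names are illustrative; this is not the Sound library's API):

```java
// Plain-Java sketch of the dB conversions (illustrative names; this is not
// the Sound library's API). 'ref' is whatever you define as 0 dB, and
// 'binWidthHz' would be sampleRate / fftSize.
public class AmpToDb {
    static final float EPS = 1e-12f; // avoid log10(0) for silent bins

    // Linear FFT amplitude -> decibels relative to 'ref'
    static float amplitudeToDb(float amp, float ref) {
        return 20f * (float) Math.log10(Math.max(amp, EPS) / ref);
    }

    // Power -> power spectral density in dB/Hz (normalize by bin width)
    static float powerToDbPerHz(float power, float binWidthHz) {
        return 10f * (float) Math.log10(Math.max(power, EPS) / binWidthHz);
    }
}
```

With a reference of 1.0, an amplitude of 1.0 maps to 0 dB and 10.0 maps to 20 dB, which is usually the sanity check to run before worrying about the color mapping.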


export application doesn't work for video library


Hello, I'm having problems exporting an application in Processing 3.0.2 on Windows 7 (32-bit). I use the video library, and everything is fine when I run the sketch in Processing, but launching the exe after export does nothing. I have seen similar problems involving GStreamer, but I still can't resolve it in my case. Thanks for any help.

Implementing a password field

import controlP5.*;

void setup(){
 size(480,640);
 ControlP5 id=new ControlP5(this);
 ControlP5 pw=new ControlP5(this);

 PFont font=createFont("arial", 20);

 //id input
 id.addTextfield("ID", 180,300,200,50)
   .setFont(font);

 pw.addTextfield("PASSWORD", 180,400,200,50);
 pw.setFont(font);
}

void draw(){
  background(0);
}

I made two textfields to receive two inputs, one for the ID and one for the password. What I want is to mask the password input with '*' characters. How can I do it?
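If your controlP5 version supports it, chaining .setPasswordMode(true) onto the addTextfield("PASSWORD", ...) call should be all that's needed (check your version's reference). If not, you can keep the real text in a variable and draw a masked copy yourself; the masking itself is trivial:

```java
// Fallback if setPasswordMode isn't available in your controlP5 version: keep
// the real text in a variable and display this masked copy instead.
// 'mask' is a hypothetical helper, not a controlP5 method.
public class PasswordMask {
    static String mask(String input) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < input.length(); i++) sb.append('*');
        return sb.toString();
    }
}
```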


Blank/Grey screen (frame differencing)


My code seems a little buggy. Sometimes I need to re-run it a few times to get the expected result, which is very annoying: I have to keep restarting Processing and executing the code again, and there is no message in the console. Three different things can happen when I run it:

a) Grey screen (can hear video sound)
b) Normal video with sound (No frame differencing function)
c) Frame differencing + video (ideal)

import processing.video.*;

int numPixels;
int[] previousFrame;
Movie video;

void setup() {
  size(1280, 770);

  video = new Movie(this, "human.mp4");

  // Start playing the movie and try to read an initial frame
  video.loop();
  video.read();
  numPixels = video.width * video.height;
  // Create an array to store the previously captured frame
  previousFrame = new int[numPixels];
  loadPixels();

}

void draw() {

  if (video.available()) {
    // When using video to manipulate the screen, use video.available() and
    // video.read() inside the draw() method so that it's safe to draw to the screen
    video.read(); // Read the new frame from the movie
    image(video, 0, 0);
    video.loadPixels(); // Make its pixels[] array available

    int movementSum = 0; // Amount of movement in the frame
    for (int i = 0; i < numPixels; i++) { // For each pixel in the video frame...
      color currColor = video.pixels[i];
      color prevColor = previousFrame[i];
      // Extract the red, green, and blue components from current pixel
      int currR = (currColor >> 16) & 0xFF; // Like red(), but faster
      int currG = (currColor >> 8) & 0xFF;
      int currB = currColor & 0xFF;
      // Extract red, green, and blue components from previous pixel
      int prevR = (prevColor >> 16) & 0xFF;
      int prevG = (prevColor >> 8) & 0xFF;
      int prevB = prevColor & 0xFF;
      // Compute the difference of the red, green, and blue values
      int diffR = abs(currR - prevR);
      int diffG = abs(currG - prevG);
      int diffB = abs(currB - prevB);
      // Add these differences to the running tally
      movementSum += diffR + diffG + diffB;
      // Render the difference image to the screen
      pixels[i] = color(diffR, diffG, diffB);
      // The following line is much faster, but more confusing to read
      //pixels[i] = 0xff000000 | (diffR << 16) | (diffG << 8) | diffB;
      // Save the current color into the 'previous' buffer
      previousFrame[i] = currColor;
    }
    // To prevent flicker from frames that are all black (no movement),
    // only update the screen if the image has changed.
    if (movementSum > 0) {
      updatePixels();
      println(movementSum); // Print the total amount of movement to the console
    }
  }
}
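A likely cause of symptoms (a) and (b): in setup(), video.width and video.height are still 0 until the first frame has actually arrived, so numPixels can be computed as 0 and the differencing loop silently does nothing; computing numPixels in draw() after the first successful video.read() is the usual guard. Independently, the per-pixel arithmetic can be pulled out and checked as a pure function (plain Java, hypothetical helper names):

```java
// The sketch's per-pixel arithmetic as pure functions (plain Java). Colors
// are packed ARGB ints, exactly as they appear in pixels[].
public class FrameDiff {
    // Sum of absolute per-channel differences between two packed pixels
    static int channelDiffSum(int curr, int prev) {
        int dr = Math.abs(((curr >> 16) & 0xFF) - ((prev >> 16) & 0xFF));
        int dg = Math.abs(((curr >> 8) & 0xFF) - ((prev >> 8) & 0xFF));
        int db = Math.abs((curr & 0xFF) - (prev & 0xFF));
        return dr + dg + db;
    }

    // Opaque difference pixel, matching the "much faster" variant in the sketch
    static int diffPixel(int curr, int prev) {
        int dr = Math.abs(((curr >> 16) & 0xFF) - ((prev >> 16) & 0xFF));
        int dg = Math.abs(((curr >> 8) & 0xFF) - ((prev >> 8) & 0xFF));
        int db = Math.abs((curr & 0xFF) - (prev & 0xFF));
        return 0xFF000000 | (dr << 16) | (dg << 8) | db;
    }
}
```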

Add a library to a sketch directory for portability


Hello there!

Is it possible to include a library into the sketch directory, so sketches can be shared without having to install the libraries in the documents/processing/libraries directory?

I know it's an anti-pattern, but I'm developing a solution where one computer installs sketches on another, so this would help a lot with portability.

Thanks in advance

Marlus

dont understand isRange() in Minim


Hi!

I'm doing sound analysis with Minim and don't understand the isRange() method in the BeatDetect class. The reference says:

In frequency energy mode this returns true if at least threshold bands of the bands included in the range [low, high] have registered a beat. In sound energy mode this always returns false.

I thought that by writing something like this:

if ( beat.isRange(0, 4, 2) ) aBoolean = true;

meant this:

If there is sound with an amplitude of at least 2 dB between the frequencies 0 and 4, return true. (Of course I realise that's not actually the case, because of the very small values involved.)

But then I don't understand either how to switch between the two modes (and what exactly is meant by a "field"?):

frequency energy mode and sound energy mode

so maybe the answer lies somewhere there?

Any help is very much appreciated!

Here's the reference for Minim (though the reference page for isRange() itself returns a 404):

code.compartmental.net/minim/beatdetect_class_beatdetect.html

Good xml library?


What is a good XML library? I'm really done with the one built into Processing: it can't read big files, and other files take ages to load.
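One point of comparison while evaluating libraries: big files generally need a streaming parser rather than a whole tree loaded into memory. Java's built-in StAX API is available from a Processing sketch and works that way; a minimal plain-Java sketch that counts elements without building a tree (for a real file, replace the StringReader with a FileInputStream):

```java
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamReader;
import java.io.StringReader;

// Streaming XML with the JDK's built-in StAX API: elements are visited one at
// a time, so memory use stays flat even for big files.
public class StreamCount {
    static int countElements(String xml, String name) {
        try {
            XMLStreamReader r = XMLInputFactory.newInstance()
                    .createXMLStreamReader(new StringReader(xml));
            int count = 0;
            while (r.hasNext()) {
                if (r.next() == XMLStreamConstants.START_ELEMENT
                        && r.getLocalName().equals(name)) {
                    count++;
                }
            }
            return count;
        } catch (Exception e) {
            return -1; // signal a parse error
        }
    }
}
```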

Performance issue with ArrayList


Hi everyone,

I am having a performance issue with ArrayList. I have spent about a day trying to figure out the cause, with no success, and I was wondering whether anyone could help me. What I am trying to do is create a sphere of points and let the user control the number of points through a slider. But when I raise the number of points to 2000, the sketch slows down dramatically. Below is the code of my sketch.

Thank you very much for your time.

Best, Frank

import peasy.*;
import controlP5.*;

PeasyCam cam;
ControlP5 cp5;
PMatrix3D currCameraMatrix;
PGraphics3D g3;

float r = 100;
float alpha, beta;
float noiseScale = 0.003;
int numPoints = 100;

Slider pNum;
ArrayList<SPoint> particles = new ArrayList<SPoint>();

void setup() {
  size(600, 600, P3D);
  g3 = (PGraphics3D)g;

  background(0);
  frameRate(60);
  noSmooth();

  cp5 = new ControlP5(this);

  cp5.addSlider("numPoints")
     .setPosition(20, 20)
     .setRange(10, 2000)
     .setCaptionLabel("Number of Points")
     .setHeight(20)
     .setValue(numPoints)
     ;

  cp5.setAutoDraw(false);

  // Setup the camera
  cam = new PeasyCam(this, 250);

  noFill();
  stroke(5, 207, 255);

  // Initialize the points
  for(int i = 0; i < numPoints; i++) {
    particles.add(new SPoint(random(0, TWO_PI), random(0, TWO_PI), r));
  }
}

void draw() {
  pushMatrix();
  background(0);

  // Update the location of the points
  for(int i = 0; i < numPoints; i++) {

    // Decide whether the system needs to remove or add points responding to the slider
    if(numPoints < particles.size()) {
      particles.remove(particles.size()-1);
    } else if (numPoints > particles.size()) {
      particles.add(new SPoint(random(0, TWO_PI), random(0, TWO_PI), r));
    }
    particles.get(i).updateAngles(noiseScale);
    particles.get(i).update();
    println("numPoints: " + numPoints);
    println(particles.size());
  }
  popMatrix();
  gui();
}

void gui() {
  currCameraMatrix = new PMatrix3D(g3.camera);
  camera();
  cp5.draw();
  g3.camera = currCameraMatrix;
}
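Two things in draw() are worth checking before blaming ArrayList itself: the add/remove reconciliation runs inside the per-point loop (so it executes up to numPoints times per frame), and the two println() calls also run once per point per frame; console output at that rate will slow down almost any sketch. A plain-Java sketch of doing the resize once, outside the loop (resizeTo and factory are illustrative names; factory stands in for new SPoint(...)):

```java
import java.util.List;
import java.util.function.Supplier;

// Reconcile the list size once per frame, instead of checking inside the
// per-point loop. 'factory' creates a new element when the list must grow.
public class ListResize {
    static <T> void resizeTo(List<T> list, int target, Supplier<T> factory) {
        while (list.size() > target) list.remove(list.size() - 1);
        while (list.size() < target) list.add(factory.get());
    }
}
```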

jsonObject is ambiguous


Now the code below is a mixture of two sketches. The goal here is that the user inputs a search term. Upon hitting enter the applet searches tinysong for the song and returns the song id (this works in its own sketch). The term is simultaneously searched for on twitter using the twitter4j library(this also works on its own.)

However, when I put the two together I get the following error: "jsonObject is ambiguous".

for the purpose of this example I removed my api keys..

Any help would be awesome.

import controlP5.*;
//import java.net.*;
/////////////////////
ControlP5 controlP5;
controlP5.Label label;
Textfield myTextfield; /// setting up text-input field
int myBitFontIndex;
PFont p; // createFont() needs a running sketch, so the font is created in setup()
///////////////////




/* TinySong API*/ // this is how you search the grooveshark database.

String tinyK = "";

int songIdNum;

String searchT; // the user inputs their query
String searchString;

ConfigurationBuilder cb = new ConfigurationBuilder();
Twitter twitterInstance;
Query queryForTwitter;


ArrayList tweets;
String[] userName = new String[0];
String[] tweetsArr = new String[0];


void setup(){

  size(1024,640);

  ////////////////////
  cb.setOAuthConsumerKey("");
  cb.setOAuthConsumerSecret("");
  cb.setOAuthAccessToken("");
  cb.setOAuthAccessTokenSecret("");

  twitterInstance = new TwitterFactory( cb.build() ).getInstance();

  ///////////////////
  int txtFw = 440;

  controlP5 = new ControlP5(this);
  p = createFont("Avenir", 32);
  controlP5.setControlFont(p);

  myTextfield = controlP5.addTextfield("enter a song name",(width/2)-(txtFw/2),(height/11)*2,txtFw,80);
  myTextfield.setFocus(true);
  //myTextfield.valueLabel().setFont(myBitFontIndex);
  myTextfield.valueLabel().style().marginTop = -6;

  ////////////////////////////////////

}

void draw(){

  background(0);
  finTweets();
}



void keyPressed(){

  if(key == ENTER || key == RETURN){

      searchT = myTextfield.getText();
      searchT = searchT.replaceAll(" ","+");

      FetchTweets();

      println(searchT);
      searchString = "http://tinysong.com/b/"+searchT+"?format=json&key="+tinyK;

  ///////////////////


  ///////////////////
  println(searchString);
    try{
        String url = searchString;

        JSONObject wholeJson = new JSONObject(join(loadStrings(url),""));
        JSONObject resp = wholeJson; // remove .getJSONObject(""); as this json file has no objects
        songIdNum = resp.getInt("SongID");
//        songIdNum = resp.getInt("m");

        println("The song ID for "+searchT+" is: "+songIdNum);

    }
    catch(JSONException e){
      println("there was an error parsing the JSONObject");
    }
  }

}

void FetchTweets(){

  String twitQ = "#"+searchT;
  queryForTwitter = new Query(twitQ);

  try{

    QueryResult result = twitterInstance.search(queryForTwitter);
    tweets = (ArrayList) result.getTweets();
    }
    catch (TwitterException te){
      println("couldn't connect this time" + te);
    }//end catch
}//end function


void finTweets(){

  for(int i=0; i<tweets.size(); i++){
      Status t = (Status) tweets.get(i);

      userName = append(userName, t.getUser().getScreenName());
      tweetsArr = append(tweetsArr, t.getText());// adds tweets to array
      //get geoLoc later
      text(userName[i] + "said: " + tweetsArr[i],20,15+i*30,width-20,40);

    }//end loop
}
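The error most likely means the compiler can see two classes named JSONObject at once, e.g. processing.data.JSONObject and a JSONObject pulled in by the twitter4j jars (which one depends on your twitter4j version; that part is an assumption). Fully qualifying the name tells the compiler which one you mean. The same collision can be reproduced with two JDK classes that share a simple name:

```java
import java.util.*;
import java.awt.*;

public class AmbiguousDemo {
    // 'List' alone would not compile here: java.util.List and java.awt.List
    // are both imported, so the simple name is ambiguous. Fully qualifying it
    // resolves the conflict, exactly as with the two JSONObject classes.
    static java.util.List<String> makeList() {
        java.util.List<String> names = new ArrayList<>();
        names.add("resolved");
        return names;
    }
}
```

Applied to the sketch, that means writing processing.data.JSONObject (or the twitter4j one) wherever the bare name currently appears.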

Save/Load JSON based Cameras in Proscene


https://www.dropbox.com/sh/2riljmpqfywibaa/AADseXkl0ms0WOYcjPSjKow7a?dl=0

^ that's a link to my code

Someone else was apparently able to save/load cameras in proscene from a file: https://forum.processing.org/one/topic/proscene-saving-loading-camera-settings-to-from-file-what-are-the-key-settings-of-a-camera.html

But I'm having trouble implementing their code. I have a feeling it's a basic problem that I just don't understand. When I run this:

import processing.data.JSONObject;

//this is how you save the camera, theoretically;
//save a Proscene scene current camera settings in as a JSON string in a file
void saveCamera(Scene scene, String fileName ) {
  processing.data.JSONObject json = new JSONObject();
  json.setFloat("fov", scene.camera().fieldOfView() );
  setVector(json, "position", scene.camera().position() );
  setVector(json, "viewdirection", scene.camera().viewDirection() );
  setVector(json, "upvector", scene.camera().upVector() );
  saveJSONObject(json, fileName);
}
//Add a PVector as a string representation of a float array to a JSON object
void setVector(JSONObject json, String attributeName, Vec v) {
  json.setString( attributeName, Arrays.toString( new float[]{ v.x, v.y, v.z} ) );
}

//#####this is how you load the camera, theoretically;
void loadCamera(Scene scene, String fileName) {
  JSONObject json = loadJSONObject(fileName);
  scene.camera().setFieldOfView( json.getFloat("fov") );
  scene.camera().setUpVector( getVector(json, "upvector") );
  scene.camera().setViewDirection( getVector(json, "viewdirection") );
  scene.camera().setPosition( getVector(json, "position") );
}
//Parse a PVector from its string representation as an array of float in a JSON object
PVector getVector(JSONObject json, String attributeName) {
  String o =  json.getString(attributeName);
  String[] arr = o.substring(1, o.length()-1).split(", ");
  float[] f = new float[arr.length];
  int i = 0;
  for (String s : arr) {
    f[i] = Float.parseFloat(s);
    i++;
  }
  return new PVector(f[0], f[1], f[2]);
}

"Arrays" isn't pointing to anything, and I'm not sure what to do about it?
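The unresolved name is just a missing import: Arrays lives in java.util, so the sketch needs `import java.util.Arrays;` at the top. The string round-trip that the save/load code relies on can be checked in plain Java:

```java
import java.util.Arrays; // the import behind "Arrays isn't pointing to anything"

// Round-trip of the string encoding used above. The save side writes
// Arrays.toString(new float[]{x, y, z}) -> "[1.0, 2.0, 3.0]"; the load side
// strips the brackets and splits on ", ". Class and method names are
// illustrative, not part of proscene.
public class VecCodec {
    static String encode(float x, float y, float z) {
        return Arrays.toString(new float[]{x, y, z});
    }

    static float[] decode(String s) {
        String[] parts = s.substring(1, s.length() - 1).split(", ");
        float[] f = new float[parts.length];
        for (int i = 0; i < parts.length; i++) f[i] = Float.parseFloat(parts[i]);
        return f;
    }
}
```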

G4P: GTextField slow input


Hello! I'm having some performance issues with GTextField when using a larger font size. While entering text that goes past the edge of the field, input slows down dramatically.

I'm experiencing this on Windows 7 64, Processing 3.1.1 and G4P 4.0.4. Do others experience this as well?

If it is a general issue, I have a few more observations. Curiously, the field is fast when deleting characters with backspace. ControlP5 is also fast, but it lacks other features like copy/paste.

Thank you!

import g4p_controls.*;
import java.awt.Font;

GTextField txf1;
int fontSize = 80;

void setup() {
  size(800, 600);

  txf1 = new GTextField(this, 100, 250, 600, 100);
  txf1.setFont(new Font("Arial", Font.PLAIN, fontSize));
}

void draw() {
  background(125);
}

How to switch between two videos and how to determine which video is visualized (active for image())


Hi everyone. I'm very new to Processing and I really need some help! I have two videos and I want to switch between them, choosing which one is shown. The two videos should always be playing, never stopped or paused: they play at the same time and I choose which one to visualize. Can someone help me? I have an exam coming up and I'm very confused about the code. Thanks in advance. Here is my initial attempt.

import processing.video.*;

Movie Mov1;
Movie Mov2;

void setup() {
  size(1280, 720);
  Mov1 = new Movie(this, "nemo_bimbo.mp4");
  Mov2 = new Movie(this, "adelaide_bimba.mp4");
}

void draw() {
  image(Mov1, 840, 0, 440, 720);
  image(Mov2, 0, 0, 440, 720);
}

void movieEvent(Movie m) {
  m.read();
}

void keyPressed() {

  // I don't know how to use switch(key), e.g. case 'a'. Please, help me!

  if (key == 'a') {
    Mov1.play();
  }
  if (key == 's') {
    Mov2.play();
  }

  // I don't know how to stop showing a video

  if (key == ' ') {
    Mov1.stop();
    Mov2.stop();
  }
}
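One common pattern, sketched here as a suggestion rather than the one right answer: keep both movies playing the whole time (loop() them in setup()), track which one is active in an int variable, and draw only the active one in draw(). The switch(key) part then only updates that variable; in plain Java it could look like this (MovieSwitch and select are illustrative names):

```java
// Track the active movie as an index; draw() then shows only the active one
// while both movies keep playing in the background.
public class MovieSwitch {
    static final int NONE = -1; // show neither movie

    // Returns which movie should be visible after this key press
    static int select(char key, int current) {
        switch (key) {
            case 'a': return 0;       // show Mov1
            case 's': return 1;       // show Mov2
            case ' ': return NONE;    // hide both
            default:  return current; // any other key: keep current state
        }
    }
}
```

In the sketch that would mean an `int active` field, `active = MovieSwitch.select(key, active);` in keyPressed(), and `if (active == 0) image(Mov1, ...); else if (active == 1) image(Mov2, ...);` in draw().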

fog in p3d /opengl


hi
I want to add fog to my 3D game.
I found this library for fog: www.hardcorepawn.com/fog/
I added it to my libraries folder, but when I run it, I get errors
(with both P3D and OPENGL).
I think it is too old.

Is there an other option?

IPcapture vs PImage.get()


Hi,

I'm using IPcapture with multiple MJPEG streams and buffering frames over time in as many circular buffers as there are cameras. The goal is to then display the streams with a delay.

I can currently connect to the cameras and display their original streams without any problem and my circular buffers also seem to be working properly but something strange happens when I try to copy the frames to the buffers. Here is a simplified version of my sketch which displays the problem:

Note: screen resolution is 1920x1080 and the camera stream is 480x300

import ipcapture.*;

IPCapture cam1;

// Circular buffer
PImage[] cam1_buffer;

// These determine the size of the circular buffer and help iterate while writing and reading through them
int nFrames = 60;
int iWrite = 0, iRead = 1;

void setup() {
  fullScreen();

  cam1 = new IPCapture(this, "http://192.168.0.172/axis-cgi/mjpg/video.cgi", "root", "root");
  cam1.start();

  cam1_buffer = new PImage[nFrames];
}

void draw() {
   if (cam1.isAvailable()) {
    cam1.read();
    cam1_buffer[iWrite] = cam1.get();
    if(cam1_buffer[0] != null){
      // The original camera image is displayed correctly at 480x300 in the upper left corner
      image(cam1,0,0);
      // The buffered frame appears under the original one but it is 1920x1080 and contains 4 stretched copies of the original side by side at the top of the image
      image(cam1_buffer[0],0,300);
    }
  }

 // Increment write and read counters
  iWrite++;
  iRead++;

  // Start writing over our buffer every time we reach the end
  if(iRead >= nFrames-1){
    iRead = 0;
  }

  if(iWrite >= nFrames-1){
    iWrite = 0;
  }
}
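Separately from the get() question, the counter logic has two small issues: iWrite and iRead advance even on frames where no new camera image was read (they probably belong inside the isAvailable() block), and the wrap test `>= nFrames-1` resets one slot early, so index nFrames-1 is never used. Modulo arithmetic avoids both off-by-ones; a plain-Java sketch:

```java
// Ring-buffer indexing with modulo arithmetic. The sketch's
// 'if (iWrite >= nFrames-1) iWrite = 0;' skips the last slot (an off-by-one);
// '(i + 1) % n' visits every index 0..n-1 in turn.
public class RingIndex {
    static int next(int i, int n) {
        return (i + 1) % n;
    }

    // Read index trailing the write index by 'delay' frames
    // (the extra '+ n' keeps the result non-negative in Java)
    static int delayed(int iWrite, int delay, int n) {
        return ((iWrite - delay) % n + n) % n;
    }
}
```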

My understanding is that IPCapture.get() returns a PImage that is the same size as the screen rather than the same size as the capture resolution. I haven't found anything in the IPcapture reference about how to force the proper resolution.

Any ideas?

Also is there a way of getting the current stream frame rate? This would allow me to better tune the display delay as the stream frame rate varies over time...

Thanks!

Problem with toggling between multiple videos on processing 2.2.1


hi,
so I'm quite a noob, just clearing that up.
I'm working on a project that involves switching between multiple videos in Processing when I press a key. I'm doing some initial testing right now, trying to get Processing to switch between two videos on a keypress. The video switches all right, but I can't see it in the sketch window! The previous video (in its paused state) stays on top. How do I fix this? Please help; find the code below.

    import processing.video.*;

    Movie theMov;
    Movie theMov2;



    void setup() {
      size(1440, 900);
      theMov = new Movie(this, "sin_city.mp4");
      theMov2 = new Movie(this, "blue_velvet.mp4");

      theMov.stop();
      theMov2.stop();
    }

    void draw() {
      background (0);
      image(theMov, 0, 0);
      image(theMov2, 0, 0);
    }

    void movieEvent(Movie m) {
      m.read();
    }

    void keyPressed() {
      if (key == 'p') {
        theMov2.pause();
        background(0);
        theMov.play();
      } else if (key == 'o') {
        theMov.pause();
        background(0);
        theMov2.play();
      }
    }


switching video doesn't work! help me, please!


Hi! Could you have a look at my code? I use CurrentMovie to switch between two videos (NemoBimboMovie and NemoShapeMovie), but they don't switch, and I don't know why. Please, help me!

import processing.core.*;
import processing.data.*;
import processing.event.*;
import processing.opengl.*;

import processing.video.*;
import processing.sound.*;

import java.util.HashMap;
import java.util.ArrayList;
import java.io.File;
import java.io.BufferedReader;
import java.io.PrintWriter;
import java.io.InputStream;
import java.io.OutputStream;
import java.io.IOException;

public class prova_di_oggi extends PApplet {

PImage file;

SoundFile sound1;
SoundFile sound2;

Movie NemoBimboMovie;
Movie NemoShapeMovie;
Movie AdelaideBimbaMovie;
Movie FirstInteractionMovie;

Movie CurrentMovie;

public void setup(){

    background (0);
    file = loadImage("/Users/MASC/Desktop/prova_di_oggi/data/rettangolo.jpg");
    sound1 = new SoundFile(this, "/Users/MASC/Desktop/prova_di_oggi/data/heartbeat.mp3");

    sound2 = new SoundFile(this, "/Users/MASC/Desktop/prova_di_oggi/data/nemo_invito.WAV");
    NemoBimboMovie = new Movie(this, "/Users/MASC/Desktop/prova_di_oggi/data/nemo_bimbo.mp4");
    NemoShapeMovie = new Movie(this, "/Users/MASC/Desktop/prova_di_oggi/data/nemo_shape.mp4");
    AdelaideBimbaMovie = new Movie(this, "/Users/MASC/Desktop/prova_di_oggi/data/adelaide_bimba.mp4");
    FirstInteractionMovie = new Movie(this, "/Users/MASC/Desktop/prova_di_oggi/data/firstinteraction.mp4");

    //CurrentMovie keeps track of which video is playing at the moment.
    //The first of the two videos I think you want to start is NemoShapeMovie
    CurrentMovie=NemoShapeMovie;
  }

public void draw(){
  image(NemoBimboMovie, 839,0, 600,854);
  image(NemoShapeMovie, 839,0, 600,854);
  image(AdelaideBimbaMovie, 0,0, 600,854);
  image(FirstInteractionMovie, 839,0, 600,854);
}

public void movieEvent(Movie m) {
  m.read();
}

public void keyPressed() {
  if (key == '1') {
    sound1.play();
    image (file,839,0,600, 854);
  }
  if (key == '2') {
    sound2.play();
  }
  else if (key == '3') {
    sound1.stop();
    sound2.stop();
  }
  //I assume 'a' means the user opens the other box and the other video should start
  else if (key == 'a') {
    double CurrentTime=CurrentMovie.time();//Get how far along the current video is
    CurrentMovie=NemoBimboMovie;
    CurrentMovie.jump((float)CurrentTime);
    }
  //I assume 's' means the user has come closer... The video with the figure starts
  else if (key == 's') {
      double CurrentTime=CurrentMovie.time();//Get how far along the current video is
      CurrentMovie=NemoShapeMovie;
      CurrentMovie.jump((float)CurrentTime);
    }
  else if (key == 'p') {
      NemoBimboMovie.play();
    }
  else if (key == 'd') {
      AdelaideBimbaMovie.play();
    }
  else if (key == 'f') {
      FirstInteractionMovie.play();
    }
}

public void settings() {  size(1439,854); }

static public void main(String[] passedArgs) {
    String[] appletArgs = new String[] { "--present", "--window-color=#666666", "--stop-color=#cccccc", "prova_di_oggi" };
    if (passedArgs != null) {
      PApplet.main(concat(appletArgs, passedArgs));
    } else {
      PApplet.main(appletArgs);
    }
  }
}

Error message when using Minim


Hello,

I'm writing a program that has a sound effect play when an image pops up on screen. The code appears to be working but I get the following error message in my command window at the bottom of the IDE.

==== JavaSound Minim Error ====
==== Don't know the ID3 code PRIV

What exactly is causing this error? Here is my code. Thanks in advance for any help.

import java.io.*;
import ddf.minim.*;

Minim minim;
AudioPlayer tick;

String codeword;
PImage [] bground;

int i = 0;  //This variable will be used to change the background images
int k = 0;  //This variable loads in the different images
int n = 4;  //The number of images for the background that we need
long bgroundtimer = 0;

void setup(){
  size(700, 500);
  background(0);
  bground = new PImage[n];
  for(k = 0; k < bground.length; k++){
    bground[k] = loadImage("C:/Users/Steven/Documents/Processing/My Saves/Word_Clock_Final/Background/bground" + k + ".png");
  }
  minim = new Minim(this);
  tick = minim.loadFile("C:/Steven/Processing/Tick.mp3");
}

void draw(){
   if(i < bground.length){
     image(bground[i], 0, 0);
     if(millis() - bgroundtimer >= 5000){
       bgroundtimer = millis();
       tick.play();
       tick.rewind();
       i++;
    }
  }
}

Weather Data Library - Missing Links


Hi all,

I'm looking for a library or piece of code that can link processing to weather data, either real time, or in the form of the standard typical meteorological year (TMY) of EnergyPlus.

Any ideas of existing implementations? I spotted a Yahoo weather library by onformative on processing.org, but the link is broken.

Any help is greatly appreciated.

Processing and Threads


I've got the following little threaded application that records some sound until the user releases the mouse button:

import javax.sound.sampled.*;
import java.io.*;
import java.net.URLDecoder;
import java.net.URLEncoder;
import http.requests.*;
import java.util.concurrent.atomic.AtomicInteger;

final JavaSoundRecorder recorder = new JavaSoundRecorder();
Thread thread;

boolean recording = false;

void setup()
{
}

void draw()
{
  //println(rr.getStop());
  print(frameCount + " ");
}

void mouseReleased()
{
  println(" released ");
  recording = false;
}

void mousePressed()
{
  println(" started ");
  recording = true;

  thread = new Thread(new Runnable() {
   public void run() {
     try {
       Thread.sleep(50); // this is fine if I just timeout and bail
       Thread.yield();
       if (!recording) {
         println(" not recording ");
         recorder.finish();
         thread.join();
       }
     }
     catch (InterruptedException ex) {
       ex.printStackTrace();
     }
   }
  });

  thread.start();
  // start recording
  recorder.start("test.wav");

  println(" OK " );
}

Works perfectly if I just time out the thread after a fixed amount of time, but I can't get the thread to listen to any input from the user. The sketch completely stops running the main p5 thread: no draw(), no mouseReleased(), etc. This is, I'm guessing, because my little recorder class is creating a second thread that starves the main p5 thread:

    public class JavaSoundRecorder {

      long length;

      // format of audio file
      AudioFileFormat.Type fileType = AudioFileFormat.Type.WAVE;

      // the line from which audio data is captured
      TargetDataLine line;

      /**
       * Defines an audio format
       */
      AudioFormat getAudioFormat() {
          float sampleRate = 8000;
          int sampleSizeInBits = 8;
          int channels = 2;
          boolean signed = true;
          boolean bigEndian = true;
          AudioFormat format = new AudioFormat(sampleRate, sampleSizeInBits, channels, signed, bigEndian);
          return format;
      }

      /**
       * Captures the sound and record into a WAV file
       */
      void start( String path) {

          String finalPath = sketchPath() + "/" + path;

          File wavFile = new File(finalPath);

          System.out.println(finalPath);

          try {
              AudioFormat format = getAudioFormat();
              DataLine.Info info = new DataLine.Info(TargetDataLine.class, format);

              // checks if system supports the data line
              if (!AudioSystem.isLineSupported(info)) {
                  System.out.println("Line not supported");
                  System.exit(0);
              }
              line = (TargetDataLine) AudioSystem.getLine(info);
              line.open(format);
              line.start();   // start capturing

              println("Start capturing...");

              AudioInputStream ais = new AudioInputStream(line);

              length = ais.getFrameLength() * ais.getFormat().getFrameSize();

              println("Start recording...");

              // start recording
              AudioSystem.write(ais, fileType, wavFile);

          } catch (LineUnavailableException ex) {
              ex.printStackTrace();
          } catch (IOException ioe) {
              ioe.printStackTrace();
          }
      }

      /**
       * Closes the target data line to finish capturing and recording
       */
      void finish() {
          println("Finished");
          line.stop();
          line.close();
      }

    }

I'm used to C++ threads and don't really understand what's going on under the hood of JVM threading. Can anyone enlighten me as to how to get my AudioInputStream-based class to play nicely with the main p5 thread?
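The starvation is probably simpler than it looks: recorder.start() is called at the end of mousePressed(), and inside it AudioSystem.write() blocks until the line is closed, so the sketch's event thread never returns from mousePressed(). The blocking call is the thing that belongs on the worker thread; mouseReleased() can then just call recorder.finish(). A minimal sketch of that shape, with Thread.sleep standing in for the blocking write (class and method names are illustrative):

```java
// Shape of the fix: the blocking call runs on a worker thread; the sketch's
// event thread only starts it and later closes the line from mouseReleased().
// Thread.sleep stands in for AudioSystem.write's blocking loop.
public class BlockingOffMain {
    volatile boolean finished = false;

    void startRecording() {
        Thread worker = new Thread(() -> {
            try {
                Thread.sleep(200); // stand-in for AudioSystem.write(...)
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            finished = true;
        });
        worker.start(); // returns immediately; draw() and mouseReleased() keep running
    }

    // Polling helper so a caller can observe completion without checked exceptions
    boolean waitUntilFinished(long timeoutMs) {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (!finished && System.currentTimeMillis() < deadline) {
            Thread.yield();
        }
        return finished;
    }
}
```

The thread.join() inside the Runnable in the original sketch also looks suspect: it makes the worker wait on itself, which is at best a no-op; once the blocking write moves to the worker, the join can go away.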
