Hello All.

I am having 2 problems with my code. I will start with the video code. It should capture 20 seconds of the screen at 50 fps; however, the video it generates is 1 minute 7 seconds long, and I am not sure why, since the math adds up. I have tried lowering the FPS, but the video is still longer than 20 seconds. The code is as follows:

import java.awt.*;
import java.awt.image.BufferedImage;
import java.util.concurrent.TimeUnit;

import com.xuggle.mediatool.IMediaWriter;
import com.xuggle.mediatool.ToolFactory;
import com.xuggle.xuggler.ICodec;

public class TestScreenRecord {

    private static final double FRAME_RATE = 50;
    private static final int SECS_TO_RUN_FOR = 20;
    private static final String outputFilename = "C:/Users/Adam/Programs/Playhouse/SR/Test.mp4";
    private static Dimension screenBounds = Toolkit.getDefaultToolkit().getScreenSize();


    public static void main(String[] args) {
        // make IMediaWriter write file to location
        final IMediaWriter writer = ToolFactory.makeWriter(outputFilename);

        // add 1 video stream at position 0 and fixed frame rate of framerate
        writer.addVideoStream(0, 0, ICodec.ID.CODEC_ID_MPEG4, screenBounds.width, screenBounds.height);

        long startTime = System.nanoTime();

        for (int i = 0;  i < SECS_TO_RUN_FOR * FRAME_RATE; i++){
            //take screenshot
            BufferedImage screen = getScreenshot();

            //convert image to right type
            BufferedImage bgrScreen = convertToType(screen, BufferedImage.TYPE_4BYTE_ABGR);

            //encode image to stream
            writer.encodeVideo(0, bgrScreen, System.nanoTime() - startTime, TimeUnit.NANOSECONDS);

            try{
                Thread.sleep((long)(1000/FRAME_RATE));
            }catch(InterruptedException e){
                e.printStackTrace();
            }
        }
        writer.close();
    }

    public static BufferedImage convertToType(BufferedImage sourceImg, int targetType){
        BufferedImage img;

        if (sourceImg.getType() == targetType){
            img = sourceImg;
        }else{
            img = new BufferedImage(sourceImg.getWidth(), sourceImg.getHeight(), targetType);
            img.getGraphics().drawImage(sourceImg, 0, 0, null);
        }
        return img;
    }

    private static BufferedImage getScreenshot(){
        try{
            Robot robot = new Robot();
            Rectangle capSize = new Rectangle(screenBounds);
            return robot.createScreenCapture(capSize);          
        }catch(AWTException e){
            e.printStackTrace();
            return null;
        }
    }   
}

Now, the second problem is with the audio code. For the video code I had a tutorial to follow, but the audio code I pieced together from the docs, some sample code, and the video code above (for using Xuggler to export the audio). I can successfully create a file, but there is no audio when I play it. I printed the values in the audioSamples array, and they are real, changing values (based on the microphone, it seems; note that I would like to capture what comes out of the speakers, but that is a separate issue). The for loop just uses an arbitrary count of 200 to test whether the code works. The comments reflect my understanding of what the code does and may not be completely accurate. The audio recording code is as follows:

import java.io.File;
import java.io.IOException;
import java.util.concurrent.TimeUnit;

import javax.sound.sampled.*;

import com.xuggle.mediatool.IMediaWriter;
import com.xuggle.mediatool.ToolFactory;
import com.xuggle.xuggler.ICodec;

public class TestSpeakerRecord {

    static final long RECORD_TIME = 60000;

    File wavFile = new File ("SampleAudioRecord.wav");
    private static final float SAMPLE_RATE = 44100.0f;
    private static final int NUM_CHANNELS = 2;

    private static final String outputFilename = "C:/Users/Adam/Programs/Playhouse/SR/SampleAudioRecord.mp3";

    public TestSpeakerRecord() throws LineUnavailableException{
        //creates specifications for audio input
        AudioFormat audioFormat = new AudioFormat(SAMPLE_RATE, 16, NUM_CHANNELS, true, false);
        // creates dataline info specifying the audioformat
        DataLine.Info dataLineInfo = new DataLine.Info(TargetDataLine.class, audioFormat);
        // creates an audio line using the specifications
        TargetDataLine line = (TargetDataLine)AudioSystem.getLine(dataLineInfo);
        //sets up the writer
        final IMediaWriter writer = ToolFactory.makeWriter(outputFilename);
        // creates an audiostream using the same specs
        writer.addAudioStream(0, 1, ICodec.ID.CODEC_ID_MP3, NUM_CHANNELS, (int)SAMPLE_RATE);

        //gets start time
        long startTime = System.nanoTime();
        // opens and starts the line
        line.open(audioFormat, line.getBufferSize());
        line.start();





        for (int i = 0; i < 200; i++){
            byte[] audioBytes = new byte[line.getBufferSize()/2];
            int numBytesRead = 0;
            numBytesRead = line.read(audioBytes, 0, audioBytes.length);

            // convert to signed shorts representing samples
            int numSamplesRead = numBytesRead/2;
            short[] audioSamples = new short[numSamplesRead];
            if (audioFormat.isBigEndian()){
                for (int j = 0; j < numSamplesRead; j++){
                    audioSamples[i] = (short)((audioBytes[2 * j] << 8) | audioBytes[2 * j + 1]);
                    //System.out.print(audioSamples[i] + ", ");
                }
            }else{
                for (int j = 0; j < numSamplesRead; j++){
                    audioSamples[i] = (short)((audioBytes[2 * j + 1] << 8) | audioBytes[2 * j]);
                    //System.out.print(audioSamples[i] + ", ");
                }
            }
            try {
                Thread.sleep((long)(1000/SAMPLE_RATE));
            } catch (InterruptedException e) {
                e.printStackTrace();
            }

            //System.out.println();
            writer.encodeAudio(0, audioSamples, System.nanoTime() - startTime, null/*TimeUnit.NANOSECONDS*/);
        }

        writer.close();
    }

    public static void main(String args[]){
        try {
            new TestSpeakerRecord();
        } catch (LineUnavailableException e) {
            e.printStackTrace();
        }
    }

}

If anyone could provide any information on why the video is longer than it should be, or why my audio file is blank when it is outputted, it would be much appreciated. Also, if you have any idea how to record the speakers, then that information is also much appreciated.

Note: In order to export the video, you need to make a blank video file to write to. I will add that part to the code later.

Thank you for any help!


All 6 Replies

Maybe:
You have video code like

loop
   do CPU intensive stuff
   wait 20 mSec

That will only loop 50 times per second if the CPU-intensive work is instantaneous (which work like yours definitely will not be).
At the very least you need something more like

loop
   endTime = time now + 20 mSec
   do CPU intensive stuff
   waitTime = endTime - time now
   wait (waitTime)

My guess is that the screen cap/encode/write code takes just over 40mSec on its own, taking your total time up to just over the minute.
Anyway, a couple of quick print statements for the nano time before and after the screen cap/encode/write will confirm it either way.
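A minimal, runnable sketch of that deadline-based loop (the names and the 20 ms figure are mine, assuming 50 fps; the capture/encode work is stubbed out):

```java
public class FramePacer {
    static final long FRAME_MILLIS = 20; // 50 fps -> one frame every 20 ms

    // Time left until the frame's deadline; clamped so we never sleep a negative amount.
    static long sleepMillis(long deadlineMillis, long nowMillis) {
        return Math.max(0, deadlineMillis - nowMillis);
    }

    public static void main(String[] args) throws InterruptedException {
        long deadline = System.currentTimeMillis();
        for (int i = 0; i < 5; i++) {
            deadline += FRAME_MILLIS;   // fixed schedule, so per-frame errors don't accumulate
            // ... screen cap / encode / write would go here ...
            Thread.sleep(sleepMillis(deadline, System.currentTimeMillis()));
        }
    }
}
```

If one frame's work overruns its deadline, the sleep is simply 0 and the loop catches up on the next frame instead of drifting further and further behind.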

As for the audio - maybe 200 iterations just gives you a sound too short for a human to notice? Can you count the actual number of samples written and divide by 44100?

I changed the video code with your suggestion in mind and came up with a way that lowers the time. I set it to 10 seconds to compare, and the file produced is an 8 second video. However, the way I solved it uses a variable frame rate, as I base it on how many times I can cycle the code given the speed of the process. I checked, and taking the screenshot itself takes about 25 ms, and the converting and writing along with the screenshot generally takes around 40 ms on my computer. Would using multiple threads to take the screenshot and then pass it to the writer be more accurate in terms of timing?
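The threaded version I have in mind would hand frames from a capture thread to a writer thread through a queue. Here is a rough sketch with the frame payloads stubbed as integers so it runs anywhere; in the real thing the producer would call robot.createScreenCapture(...) and the consumer writer.encodeVideo(...):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class CaptureQueueSketch {
    // Push frameCount stub "frames" through a bounded queue; returns how many
    // the writer thread drained (a stand-in for frames actually encoded).
    public static int runPipeline(int frameCount) throws InterruptedException {
        BlockingQueue<Integer> frames = new ArrayBlockingQueue<>(8);
        final int[] written = {0};
        Thread writerThread = new Thread(() -> {
            try {
                for (int i = 0; i < frameCount; i++) {
                    frames.take();  // blocks until a frame is available
                    written[0]++;   // stand-in for writer.encodeVideo(...)
                }
            } catch (InterruptedException ignored) {
            }
        });
        writerThread.start();
        for (int i = 0; i < frameCount; i++) {
            frames.put(i);          // stand-in for robot.createScreenCapture(...)
        }
        writerThread.join();
        return written[0];
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runPipeline(10));
    }
}
```

The bounded queue means the capture thread can keep a steady cadence even when an individual encode runs long, as long as encoding keeps up on average.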

I also looked at the audio file. The number of samples taken was 4410000. I placed the generated mp3 file into Audacity, and the only spikes I saw were tiny and equally spaced. I am assuming that is where each segment starts. I also forgot to mention that the generated mp3 file is 50 seconds long, so the length shouldn't be the issue.
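Working the numbers, assuming the 4410000 figure counts interleaved samples across both stereo channels:

```java
public class ClipLength {
    // Clip length in seconds from interleaved sample count, channel count, and rate.
    static double seconds(long totalSamples, int channels, float sampleRate) {
        return totalSamples / (double) channels / sampleRate;
    }

    public static void main(String[] args) {
        // 4,410,000 samples / 2 channels / 44,100 Hz = 50 s,
        // consistent with the 50 second mp3 I got
        System.out.println(seconds(4_410_000L, 2, 44_100f));
    }
}
```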

Here is the new video capture code.

import java.awt.*;
import java.awt.image.BufferedImage;
import java.util.concurrent.TimeUnit;

import com.xuggle.mediatool.IMediaWriter;
import com.xuggle.mediatool.ToolFactory;
import com.xuggle.xuggler.ICodec;

public class TestScreenRecord {

    private static final double FRAME_RATE = 13;
    private static final int SECS_TO_RUN_FOR = 10;
    private static final String outputFilename = "C:/Users/Adam/Programs/Playhouse/SR/Test.mp4";
    private static Dimension screenBounds = Toolkit.getDefaultToolkit().getScreenSize();

    private static Robot robot;
    private static Rectangle capSize = new Rectangle(screenBounds);
    public static void main(String[] args) {
        // make IMediaWriter write file to location
        final IMediaWriter writer = ToolFactory.makeWriter(outputFilename);
        try {
            robot = new Robot();
        } catch (AWTException e) {
            e.printStackTrace();
            System.exit(0);
        }
        // add 1 video stream at position 0 and fixed frame rate of framerate
        writer.addVideoStream(0, 0, ICodec.ID.CODEC_ID_MPEG4, screenBounds.width, screenBounds.height);

        long startTime = System.nanoTime();

        for (int i = 0;  i < SECS_TO_RUN_FOR * FRAME_RATE; i++){
            System.out.println("Start: " + System.currentTimeMillis());
            long procStart = System.currentTimeMillis();
            long endTime = startTime + (long)(1000/FRAME_RATE);
            //take screenshot

            BufferedImage screen = getScreenshot();
            //System.out.println("End: " + System.nanoTime());
            //convert image to right type
            BufferedImage bgrScreen = convertToType(screen, BufferedImage.TYPE_3BYTE_BGR);

            //encode image to stream
            writer.encodeVideo(0, bgrScreen, System.nanoTime() - endTime, TimeUnit.NANOSECONDS);
            System.out.println("End: " + System.currentTimeMillis());
            startTime = endTime;
            try{
                Thread.sleep((long)((1000/(System.currentTimeMillis() - procStart))));
            }catch(InterruptedException e){
                e.printStackTrace();
            }
        }
        writer.close();
    }

    public static BufferedImage convertToType(BufferedImage sourceImg, int targetType){
        BufferedImage img;

        if (sourceImg.getType() == targetType){
            img = sourceImg;
        }else{
            img = new BufferedImage(sourceImg.getWidth(), sourceImg.getHeight(), targetType);
            img.getGraphics().drawImage(sourceImg, 0, 0, null);
        }
        return img;
    }

    private static BufferedImage getScreenshot(){
        return robot.createScreenCapture(capSize);
    }   
}

Thank you for your help!

Why don't you try threads? Ignore this comment.

In your video code, what is the need for Thread.sleep((long)(1000/SAMPLE_RATE));, and did you try reducing the sleep time?

Re video: just what I suspected. You may find that threading the screen shot and the writing buys you a little improvement; it's not hard, so give it a try. But I doubt you will get much from it. Realistically you will have to accept a frame rate that your computer is capable of maintaining, which looks like maybe 24 fps? (Good enough for the cinema!)

Re sound: sorry, no ideas.

@newcoder310 As you can see in the second video code that I uploaded, the sleep is based on the rate of execution rather than the user-set fps. Even if I reduce the sleep time, the execution time cannot change, so the maximum fps I can get is 1 second divided by the execution time.
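The arithmetic behind that ceiling, using my measured timings (the 40 ms figure is from my machine, not a general rule):

```java
public class FpsCeiling {
    // Best sustainable frame rate if each frame's capture + convert + encode
    // takes frameWorkMillis: 1000 ms divided by the per-frame work.
    static double maxFps(long frameWorkMillis) {
        return 1000.0 / frameWorkMillis;
    }

    public static void main(String[] args) {
        System.out.println(maxFps(40)); // ~25 fps with 40 ms of work per frame
    }
}
```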

@JamesCherrill I will try the threading a bit later. For now I will just focus on getting the sound working. Thanks for all of your help!
