
node-core-audio v0.5.1

Core native node.js audio functionality, including sound card access and audio streaming

Node Core Audio


A C++ extension for node.js that gives JavaScript access to audio buffers and basic audio processing functionality

Right now, it's basically a node.js binding for PortAudio.

NOTE: Looking for help maintaining this repository!

npm install node-core-audio

Basic Usage

Below is the most basic use of the audio engine. We create a new instance of node-core-audio, and then give it our processing function. The audio engine will call the audio callback whenever it needs an output buffer to send to the sound card.

// Create a new instance of node-core-audio
var coreAudio = require("node-core-audio");

// Create a new audio engine
var engine = coreAudio.createNewAudioEngine();

// Add an audio processing callback
// This function accepts an input buffer coming from the sound card,
// and returns an output buffer to be sent to your speakers.
// Note: This function must return an output buffer
function processAudio( inputBuffer ) {
    console.log( "%d channels", inputBuffer.length );
    console.log( "Channel 0 has %d samples", inputBuffer[0].length );

    return inputBuffer;
}

engine.addAudioCallback( processAudio );

// Alternatively, you can read/write samples to the sound card manually

var engine = coreAudio.createNewAudioEngine();

// Grab a buffer
var buffer = engine.read();

// Silence the 0th channel
for( var iSample = 0; iSample < buffer[0].length; ++iSample )
    buffer[0][iSample] = 0.0;

// Send the buffer back to the sound card
engine.write( buffer );

Important! Processing Thread

When you are writing code inside your audio callback, you are operating on the processing thread of the application. In this high-priority environment you should keep performance in mind at all times: allocations and other complex operations are possible, but dangerous.


The basic principle is that you should have everything ready to go before you enter the processing function. Buffers, objects, and functions should be created in a constructor or static function outside of the audio callback whenever possible. The examples in this readme are not necessarily good practice as far as performance is concerned.
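As a sketch of that principle, buffers can be allocated once before the stream starts so the callback itself performs no allocation. The channel count, buffer length, and gain value here are arbitrary example values, not library defaults:

```javascript
// Illustrative sketch: allocate everything the callback needs up front.
// NUM_CHANNELS, FRAMES_PER_BUFFER, and the 0.5 gain are example values.
var NUM_CHANNELS = 2;
var FRAMES_PER_BUFFER = 256;

// The output buffer is created once, outside the callback
var outputBuffer = [];
for (var ch = 0; ch < NUM_CHANNELS; ++ch) {
    outputBuffer[ch] = new Float32Array(FRAMES_PER_BUFFER);
}

// The callback only copies and scales samples - no allocations here
function processAudio(inputBuffer) {
    for (var ch = 0; ch < inputBuffer.length; ++ch) {
        for (var i = 0; i < inputBuffer[ch].length; ++i) {
            outputBuffer[ch][i] = inputBuffer[ch][i] * 0.5; // halve the volume
        }
    }
    return outputBuffer;
}

// engine.addAudioCallback(processAudio);
```

Because `outputBuffer` is reused on every call, the processing function stays allocation-free once the stream is running.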

The callback is only called once all buffers have been processed by the sound card.

Audio Engine Options

  • sampleRate [default 44100]
    • Sample rate - number of samples per second in the audio stream
  • sampleFormat [default sampleFormatFloat32]
    • Bit depth - Number of bits used to represent sample values
    • Available formats: sampleFormatFloat32, sampleFormatInt32, sampleFormatInt24, sampleFormatInt16, sampleFormatInt8, sampleFormatUInt8.
  • framesPerBuffer [default 256]
    • Buffer length - Number of samples per buffer
  • interleaved [default false]
    • Interleaved / Deinterleaved - determines whether samples are given to you as a two-dimensional array, buffer[channel][sample] (deinterleaved), or as a single buffer with samples from alternating channels (interleaved).
  • inputChannels [default 2]
    • Input channels - number of input channels
  • outputChannels [default 2]
    • Output channels - number of output channels
  • inputDevice [default to Pa_GetDefaultInputDevice]
    • Input device - id of the input device
  • outputDevice [default to Pa_GetDefaultOutputDevice]
    • Output device - id of the output device
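To make the interleaved option concrete, here is a small standalone helper (illustrative only, not part of node-core-audio) that converts a deinterleaved buffer[channel][sample] array into the single alternating-channel array an interleaved engine would use:

```javascript
// Illustrative helper (not part of node-core-audio):
// flatten a deinterleaved buffer[channel][sample] array into
// one interleaved array of alternating channel samples.
function interleave(deinterleaved) {
    var channels = deinterleaved.length;
    var frames = deinterleaved[0].length;
    var out = new Float32Array(channels * frames);
    for (var i = 0; i < frames; ++i) {
        for (var ch = 0; ch < channels; ++ch) {
            out[i * channels + ch] = deinterleaved[ch][i];
        }
    }
    return out;
}
```

For a stereo buffer, the interleaved layout is L0, R0, L1, R1, and so on.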


First things first

var coreAudio = require("node-core-audio");

Create an audio processing function

function processAudio( inputBuffer ) {
    // Just print the value of the first sample on the left channel
    console.log( inputBuffer[0][0] );
}

Initialize the audio engine and setup the processing loop

var engine = coreAudio.createNewAudioEngine();

engine.addAudioCallback( processAudio );

General functionality

// Returns whether the audio engine is active
bool engine.isActive();

// Updates the parameters and restarts the engine. All keys from getOptions() are available.
engine.setOptions({
    inputChannels: 2
});

// Returns all parameters
array engine.getOptions();

// Reads a buffer from the input of the sound card and returns it as an array.
// Note: this is a blocking call, don't take too long!
array engine.read();

// Writes the buffer to the output of the sound card. Returns false if the stream underflowed.
// Note: this is a blocking call
bool engine.write(array input);

// Returns the name of a given device
string engine.getDeviceName( int inputDeviceIndex );

// Returns the total number of audio devices
int engine.getNumDevices();
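For example, getNumDevices() and getDeviceName() can be combined to enumerate every audio device. This is an illustrative sketch: the listDevices helper is not part of the library, and the engine setup is shown commented out:

```javascript
// Illustrative helper (not part of node-core-audio):
// collect the name of every audio device the engine can see.
function listDevices(engine) {
    var names = [];
    var numDevices = engine.getNumDevices();
    for (var i = 0; i < numDevices; ++i) {
        names.push(engine.getDeviceName(i));
    }
    return names;
}

// var engine = require("node-core-audio").createNewAudioEngine();
// console.log(listDevices(engine).join("\n"));
```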

Known Issues / TODO

  • Add FFTW to the C++ extension, so you can compute fast FFTs from JavaScript, and also register for the FFT of incoming audio rather than the audio itself
  • Add support for streaming audio over sockets


License

MIT - See LICENSE file.

Copyright Mike Vegeto, 2013

