[Fluxus] time while rendering

Rob Couto dbtx11 at gmail.com
Sat Jun 2 13:36:19 PDT 2012


On 6/2/12, Kassen <signal.automatique at gmail.com> wrote:
> This sounds great! So that position would be like a special kind of
> time-keeping particular to this purpose?

Yes, it's just time in seconds. Where (time) is seconds elapsed since
fluxus was started, (process-time) is seconds elapsed in the audio
file.

> Yes, I too think flxtime and time are the same. I think that that is
> just because it all grew organically.

I suggested in the descriptions that you can quickly tweak a script
for processing with (define (time) (process-time)) and (define (delta)
(process-delta)) sometime after (clear) and before the first call to
(time) or (delta). So if they're the same and you're using (flxtime),
that's all good and I don't have to add anything. Only, didn't someone call
this defactoring? Replacing functionality with documentation? I said
*sort of* fixed because I was already wondering whether it's better to
have the engine check AudioCollector::IsProcessing() and if it's true,
replace the values returned by (time) and (delta) and of course
(flxtime), getting the wav time from AudioCollector-- transparently so
it all just works. One reason I hesitate is that the engine is, as far
as I can tell, completely ignorant of the audio module, and that might
be a good thing. On the other hand, it may be why the AudioCollector
never gets destroyed... at least not for me. Even when running from
master, I always had to ctrl+c in the terminal because the jack client
never closed, and you'll laugh when you see how I temporarily
"handled" that. It might only be my machines' weirdness.

The other reason I hesitate is that (process) is not used nearly as
often as (time) is called, so it seems that (define (time)
(process-time)) is more efficient, though not more convenient. But if
you're doing what I did while testing, you start with a working script
and already have to add a few lines such as (start-framedump), so
adding another few lines temporarily to make it work seems acceptable.
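
Just to make that concrete, the top of a tweaked render script would
look something like this. It's only a sketch: "loop.wav" and my-scene
are placeholders, and (process-time)/(process-delta) exist only in my
branch, not in master:

    (clear)
    (process "loop.wav")             ; take time from this wav, not the clock
    (define (time) (process-time))   ; shadow (time) with seconds into the wav
    (define (delta) (process-delta)) ; shadow (delta) with the per-frame step
    (start-framedump "frame" "jpg")  ; write one image per frame for encoding later

    (define (my-scene)
        (with-state
            (rotate (vector 0 (* 45 (time)) 0))
            (draw-cube)))

    (every-frame (my-scene))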

On the other other hand, the reason it could be a good thing-- having
nothing to do with convenience-- is that hooking the engine into
processing time would allow you to play back recorded keys while
playing a wav that you had recorded from jack while you typed them.
Then it would also make sense to have the keypress recorder optionally
trigger jack recording right in fluxus, wait for jack to be ready, and
start writing the key tape and the wav simultaneously. I think I could
do this. libsndfile is already there :) One use for that is what you
said: you could be performing live at, say, 1024x640 on a netbook, and
then do a full HD render afterward, including the live sound, and it
would always be synced to the sound AND keys, if that mattered.
With fluxa in the mix, it could matter a lot. I would probably never
use it, but if anyone thinks they would, it's possible.

> This is really interesting! Are you using a separate repository
> because you were unsure at all when you started?

Thanks :) it's separate just because it seemed clean, I guess.
Apparently gitorious can't show project pages if they don't have a
master branch, so I have a copy of that too. The dbtx branch is for
all the changes I like; right now that's just the jack branch plus
some renderer tweaks I talked about way back, for alternate blending
when using blur and for making blur work with (clip).

> As I see it (being in a quite similar position) hacking on Fluxus is a
> LOT more fun than a "teach yourself C++" book or formal lessons. I made
> mistakes, you probably will too. Then you fix them, learn something,
> and it's all a little better again.

C++ for Dummies came with a Deluxe Compiler Kit, which was just a
crippled MSVC++ 6.0 that would throw a message box warning you about
not distributing any program you compiled: every single time your
winamp plugin was enumerated or started, or every time you launched
zsnes, until the directx 8.1 sdk stopped linking correctly... That was
in 2000 and it did NOT get me started. Oh, if only I'd known... Now I
know that GCC's error messages are usually enlightening and I am
definitely having fun.

> Well, for the reasons mentioned yesterday I would really appreciate
> some functionality to help with rendering. I know right now desktop
> grabs are what most prefer (with good reason!) but there is something
> to be said for permitting complexity that would otherwise be
> impossible on a given machine and for exact frame timing when we might
> work with others who might use more traditional video applications.

I really hacked up (process). Now instead of getting the whole wav
into memory and taking buffers of uncommon length determined by
(start-audio), it keeps the wav open and in every frame it seeks to
the right time and then reads only the right amount. This is a lot
less data, since with a 44100 Hz wav and the usual(?) buffer size of
256 there are about 172 buffers' worth per second and you're probably
only taking 30 or 60 frames. It
also works with any specified buffer size, overlapping automatically
when it's a larger fraction of a second than the frame time. Of course
that probably only happens for me when I'm playing with 16384-point
FFTs, which I meant to show off, but I can't remember how I got
mencoder to work a while back. If it sounds like a good idea, I can
slice it out, replace the mix-down to mono, and make it a patch for
fluxus-audio in master along with the time stuff.
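
Just to put numbers on it (this is only arithmetic, nothing taken from
the module itself):

    (/ 44100.0 256)          ; ~172.3 buffers per second with the old fixed-buffer reads
    (/ 44100.0 60)           ; 735.0  samples per frame actually needed at 60 fps
    (/ 44100.0 30)           ; 1470.0 samples per frame at 30 fps
    (/ 16384 (/ 44100.0 60)) ; ~22.3  a 16384-sample buffer spans ~22 frames, hence the overlap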

>> Rather than rewrite fluxus-audio, and since I meant to put in lots of
>> jack client commands and midi support, I copied lots of fluxus-audio
>> into a new module, fluxus-jack. I wrote the docs for everything, but
>> can't translate so wherever a function was changed so that I had to
>> change the description, I opted to wipe out the non-English blocks.
>> I'm going to get it running in windows, too... wish me luck :)
>
> That sounds very sensible, the Windows thing should make some people
> quite happy and I wish you the best of luck. This sounds great!

I should have edited that-- I didn't "copy lots of fluxus-audio", I
copied it wholesale and started rewriting the copy after I worked out
how to get fluxus to build and run a new module so it would be
possible to choose one at compile time. Also that means I didn't
"write docs for everything", I meant "wrote docs for everything I
added" and rewrote docs for what needed changing, like (gh n) still
works but now that's short for (gh n 0), and (gh n 1) is the second
port, etc. I even wrote docs for things that don't work, which is
probably a crime-- so ignore the (add-jack-audioin) and
(add-jack-midiin), those are based on a desire to change the number of
ports without using (stop-audio-fft) and starting it again. They are
only there waiting for me to mangle the JackClient some more, with
apologies to Dave. The best hierarchy I can think of is this: objects
like the AudioCollector and aubio and the possible builtin wav
recorder all call JackClient with the number and name of the port(s)
on another jack-ified program that they want to listen to. JackClient
then takes care of getting the fluxus input port(s) created and
connected until the FFT/aubio/whatever object is destroyed. So
ultimately people *could* do port management in scheme if they wanted
(and could even leave out qjackctl), but they wouldn't be forced to.
If anyone has any thoughts on this, I'd like to hear them-- and it's
probably best to start a new thread.
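
In a script, the multi-port (gh) I described looks roughly like this.
Again, it's only in the fluxus-jack branch, the port name is just a
placeholder, and I'm leaving out how the second input port gets set up
since (add-jack-audioin) doesn't work yet:

    (start-audio "system:capture_1" 256 44100)

    (define (render-me)
        (with-state
            ; (gh n) still works and is short for (gh n 0): harmonic n of port 0
            (scale (vector 1 (+ 1 (gh 5)) 1))
            (draw-cube))
        (with-state
            (translate (vector 2 0 0))
            ; (gh n 1) is the same harmonic taken from the second port
            (scale (vector 1 (+ 1 (gh 5 1)) 1))
            (draw-cube)))

    (every-frame (render-me))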

Sorry for jacking your thread, and thanks again :)

-- 
Rob


