- refactoring the feedFrame/renderFrame/getTexture flow to make smarter
  use of memory and autorelease pools (see the first sketch after this
  entry)
- started implementing CVFilter
  (working toward making CoreImage filters usable from javascript; see
  the second sketch after this entry)
- merged some local changes (avoiding the use of deprecated APIs)
- the xcode project has been cleaned up
  (things got messed up a bit by the flowmixer branch, and some new files
  ended up in odd locations)
NOTE:
this is still a work in progress and all functionality still needs
testing. Something could have been messed up while merging my local
changes with the flowmixer branch
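
A minimal sketch of the idea behind the feedFrame/renderFrame memory
refactor, assuming a per-frame entry point that does Core Image work
under manual retain/release (pre-ARC): a short-lived autorelease pool is
created and drained around each frame so temporaries do not accumulate.
The class, ivar and method names (CVFrameSource, feedFrame:, filter,
lastImage) are illustrative, not the actual ones.

    #import <Cocoa/Cocoa.h>
    #import <QuartzCore/QuartzCore.h>
    #import <CoreVideo/CoreVideo.h>

    @interface CVFrameSource : NSObject {
        CIFilter *filter;      // per-layer filter, set up elsewhere
        CIImage  *lastImage;   // most recent filtered frame, kept across pools
    }
    - (void)feedFrame:(CVImageBufferRef)buffer;
    @end

    @implementation CVFrameSource

    // One short-lived pool per incoming frame: Core Image temporaries are
    // autoreleased, so without a local pool they would only be freed at the
    // end of the run loop cycle (or pile up on a rendering thread).
    - (void)feedFrame:(CVImageBufferRef)buffer
    {
        NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

        CIImage *input  = [CIImage imageWithCVImageBuffer:buffer];      // autoreleased
        [filter setValue:input forKey:@"inputImage"];
        CIImage *output = [[filter valueForKey:@"outputImage"] retain]; // survives the pool

        [lastImage release];
        lastImage = output;   // renderFrame/getTexture would read this later

        [pool release];       // autoreleased temporaries are freed here, once per frame
    }

    @end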
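
A hedged sketch of the direction CVFilter could take: a thin wrapper
around CIFilter that exposes filters by name and parameters by key, so
the javascript/C++ glue only has to pass strings and numbers. The
interface below is an assumption, not the actual CVFilter API.

    #import <Foundation/Foundation.h>
    #import <QuartzCore/QuartzCore.h>

    @interface CVFilter : NSObject {
        CIFilter *filter;
    }
    - (id)initWithName:(NSString *)name;                         // e.g. @"CIZoomBlur"
    - (void)setParameter:(NSString *)key toDouble:(double)value;
    - (CIImage *)apply:(CIImage *)input;
    @end

    @implementation CVFilter

    - (id)initWithName:(NSString *)name
    {
        if ((self = [super init])) {
            filter = [[CIFilter filterWithName:name] retain];
            [filter setDefaults];   // sane starting values for all parameters
        }
        return self;
    }

    - (void)setParameter:(NSString *)key toDouble:(double)value
    {
        // CIFilter parameters are key-value coded, so a string/number pair is enough
        [filter setValue:[NSNumber numberWithDouble:value] forKey:key];
    }

    - (CIImage *)apply:(CIImage *)input
    {
        [filter setValue:input forKey:@"inputImage"];
        return [filter valueForKey:@"outputImage"];
    }

    - (void)dealloc
    {
        [filter release];
        [super dealloc];
    }

    @end

Keeping the wrapper string/number based means the glue code never needs
to see CoreImage types directly.
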
- Separating view logic from the real layer implementations
  (so layers can be created programmatically and associated with a view
  only if/when necessary; see the first sketch after this group)
- Separating the c++ glue classes from their related cocoa implementations
- It can be changed through the preferences panel (WIP)
  (started implementing a screen keyboard listener to capture keystrokes,
  acting like the cocoa-based keyboard controller; see the second sketch
  after this group)
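
A sketch, under assumed names, of what the view/layer separation aims
at: the controller owns the layer state and can run headless, and an
NSView is attached only if/when a preview is actually wanted.
CVLayerController, setView: and tick are stand-ins, not the real
classes.

    #import <Cocoa/Cocoa.h>

    @interface CVLayerController : NSObject {
        NSView *view;   // nil until a preview view is actually requested
    }
    - (void)setView:(NSView *)aView;
    - (void)tick;       // per-frame work, independent of any view
    @end

    @implementation CVLayerController

    - (void)setView:(NSView *)aView
    {
        [aView retain];
        [view release];
        view = aView;
    }

    - (void)tick
    {
        // ... advance/produce the layer's frame here ...
        if (view)
            [view setNeedsDisplay:YES];   // touch AppKit only when a view exists
    }

    @end

    // usage: a layer created programmatically (e.g. from javascript) runs
    // headless, and is associated with a view only when a preview opens:
    //
    //   CVLayerController *layer = [[CVLayerController alloc] init];
    //   ...
    //   [layer setView:previewView];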
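
And a minimal sketch of a screen keyboard listener, assuming an NSView
subclass placed in the output window: it accepts first-responder status
and receives keyDown: events, which would then be forwarded to the
keyboard controller. The class name and the forwarding step are
hypothetical.

    #import <Cocoa/Cocoa.h>

    @interface CVScreenKeyboardView : NSView
    @end

    @implementation CVScreenKeyboardView

    - (BOOL)acceptsFirstResponder
    {
        return YES;   // required to receive keyDown: events at all
    }

    - (void)keyDown:(NSEvent *)event
    {
        NSString *chars = [event charactersIgnoringModifiers];
        NSLog(@"key pressed: %@ (code %u)", chars, (unsigned)[event keyCode]);
        // here the keystroke would be handed to the keyboard controller
    }

    @end
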
- per-layer filter parameters are now restored correctly when moving the
  FilterPanel between different layers (see the sketch after this group)
- Various fixes and improvements in CVF0rLayer as well.
- The FilterPanel no longer disappears when selecting a new filter.
- Introduced the preliminary logic necessary to access geometry layers
created through javascript
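
A hedged sketch of the per-layer bookkeeping this implies: each layer
keeps its own dictionary of filter parameter values, and the FilterPanel
re-applies that dictionary whenever it is pointed at a different layer.
CVLayerState and its methods are illustrative names, not the actual
implementation.

    #import <Foundation/Foundation.h>
    #import <QuartzCore/QuartzCore.h>

    @interface CVLayerState : NSObject {
        NSMutableDictionary *filterParams;   // e.g. @"inputRadius" -> NSNumber
    }
    - (void)rememberValue:(NSNumber *)value forParameter:(NSString *)key;
    - (void)restoreIntoFilter:(CIFilter *)filter;
    @end

    @implementation CVLayerState

    - (id)init
    {
        if ((self = [super init]))
            filterParams = [[NSMutableDictionary alloc] init];
        return self;
    }

    // called whenever a control in the FilterPanel changes a value
    - (void)rememberValue:(NSNumber *)value forParameter:(NSString *)key
    {
        [filterParams setObject:value forKey:key];
    }

    // called when the FilterPanel switches to this layer
    - (void)restoreIntoFilter:(CIFilter *)filter
    {
        NSEnumerator *keys = [filterParams keyEnumerator];
        NSString *key;
        while ((key = [keys nextObject]))
            [filter setValue:[filterParams objectForKey:key] forKey:key];
    }

    - (void)dealloc
    {
        [filterParams release];
        [super dealloc];
    }

    @end
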
- corrected the framerate calculation (see the first sketch after this
  list)
- switching to a default screensize of 512x384 instead of 400x300
- reorganizing gui controls
- introduced icons for both the capture and generator layers
- saving/restoring per-layer state of both filter and image parameters
- FilterPanel now updates correctly with layer-specific parameters
- better initialization of the video capture device in CVGrabber (see
  the second sketch after this list)
- (something else I don't remember right now)
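
First, a sketch of a straightforward framerate measurement: count the
frames actually delivered and divide by the elapsed host time, instead
of assuming a fixed per-frame delta. The function name is hypothetical;
CVGetCurrentHostTime and CVGetHostClockFrequency are the CoreVideo
host-clock calls.

    #include <CoreVideo/CoreVideo.h>
    #include <stdint.h>

    static int64_t  frames     = 0;
    static uint64_t startTicks = 0;

    // call once per displayed frame; returns the average fps so far
    double measured_fps(void)
    {
        uint64_t now = CVGetCurrentHostTime();   // ticks of the host clock
        if (startTicks == 0) {
            startTicks = now;                    // first frame: start the clock
            return 0.0;
        }
        frames++;
        double elapsed = (double)(now - startTicks) / CVGetHostClockFrequency();
        return (elapsed > 0.0) ? (double)frames / elapsed : 0.0;
    }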
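
And a sketch of the kind of capture-device initialization a
CVGrabber-style class needs, using QTKit (the capture API of that era).
Error handling is abbreviated, the function name is illustrative, and
the delegate is expected to implement QTCaptureDecompressedVideoOutput's
frame callback.

    #import <QTKit/QTKit.h>

    QTCaptureSession *setup_capture(id delegate, NSError **error)
    {
        // pick the default video input (built-in iSight, first webcam, ...)
        QTCaptureDevice *device =
            [QTCaptureDevice defaultInputDeviceWithMediaType:QTMediaTypeVideo];
        if (![device open:error])
            return nil;

        QTCaptureSession *session = [[QTCaptureSession alloc] init];

        QTCaptureDeviceInput *input =
            [[[QTCaptureDeviceInput alloc] initWithDevice:device] autorelease];
        if (![session addInput:input error:error]) {
            [session release];
            return nil;
        }

        // decompressed frames arrive at the delegate as CVImageBuffers via
        // captureOutput:didOutputVideoFrame:withSampleBuffer:fromConnection:
        QTCaptureDecompressedVideoOutput *output =
            [[[QTCaptureDecompressedVideoOutput alloc] init] autorelease];
        [output setDelegate:delegate];
        if (![session addOutput:output error:error]) {
            [session release];
            return nil;
        }

        [session startRunning];
        return session;   // caller owns the returned session
    }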