- moving filter-related code out of the layer controller
  by creating a Filter subclass: CVFilter (see the sketch below)
- making CoreImage filters available to JavaScript when running on OSX
Note: this is a work in progress and the implementation is still unfinished
(CoreImage filters won't work in this revision)
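A minimal sketch of what the CVFilter wrapper could look like, assuming a
Filter base class with a virtual apply() hook (the base class, the method
names and the retain/release ownership are assumptions for illustration;
only the CIFilter calls are standard CoreImage API):

    // CVFilter sketch: wraps a named CoreImage filter behind a generic
    // Filter interface so the rest of the pipeline stays CoreImage-agnostic.
    #import <QuartzCore/QuartzCore.h>
    #include <string>

    class Filter {                        // assumed base class
    public:
      virtual ~Filter() {}
      virtual CIImage *apply(CIImage *input) = 0;
    };

    class CVFilter : public Filter {
    public:
      explicit CVFilter(const std::string &ciName)
        : filter([[CIFilter filterWithName:
                    [NSString stringWithUTF8String:ciName.c_str()]] retain]) {}
      ~CVFilter() { [filter release]; }

      // feed the layer's current frame through the wrapped CIFilter
      CIImage *apply(CIImage *input) {
        [filter setValue:input forKey:kCIInputImageKey];
        return [filter valueForKey:kCIOutputImageKey];
      }

    private:
      CIFilter *filter;
    };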
refactoring the feedFrame/renderFrame/getTexture flow to make smarter
use of memory and autorelease pools (see the sketch below)
started implementing CVFilter
(a first step towards making CoreImage filters usable from JavaScript)
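A hedged sketch of how a per-frame autorelease pool could bracket that flow;
the feedFrame/renderFrame/getTexture names come from the entry above, but
their signatures and the CVLayer interface shown here are assumptions:

    #import <Foundation/Foundation.h>
    #import <CoreVideo/CoreVideo.h>
    #import <OpenGL/gl.h>

    // Assumed layer interface: names from the changelog, signatures invented.
    @interface CVLayer : NSObject
    - (void)feedFrame:(CVPixelBufferRef)frame;
    - (void)renderFrame;
    - (GLuint)getTexture;
    @end

    // One pool per frame: Cocoa/CoreVideo temporaries created while feeding,
    // rendering and fetching the texture are released at the end of each
    // pass instead of piling up until some outer pool finally drains.
    static void render_one_frame(CVLayer *layer, CVPixelBufferRef frame) {
      NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
      [layer feedFrame:frame];
      [layer renderFrame];
      GLuint tex = [layer getTexture];
      (void)tex;                // the texture would be handed to the screen
      [pool release];           // autoreleased temporaries are freed here
    }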
- Separating view logic from the real layer implementations
  (so layers can be created programmatically and associated with a
  view only if/when necessary; see the sketch below)
- Separating the C++ glue classes from their related Cocoa implementations
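A rough sketch of that view/implementation split; the class and method names
below are invented for illustration. The layer object owns its rendering
state and can exist headless, and a Cocoa view is bound to it only when the
layer actually has to appear in the GUI.

    #import <Cocoa/Cocoa.h>

    // A layer that renders regardless of whether a view is attached; the
    // Cocoa view is only told to redraw once one has been bound.
    class CVLayerImpl {
    public:
      CVLayerImpl() : view(nil) {}

      void setView(NSView *v)  { view = v; }       // bind a view on demand
      bool hasView() const     { return view != nil; }

      void renderFrame() {
        // ... produce the frame here, with or without a GUI ...
        if (view)
          [view setNeedsDisplay:YES];              // refresh only if shown
      }

    private:
      NSView *view;
    };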
- per-layer filter parameters are now restored correctly when moving the
  FilterPanel between different layers (see the sketch below)
- Various fixes and improvements in CVF0rLayer as well.
- The FilterPanel no longer disappears when selecting a new filter.
- Introduced the preliminary logic needed to access geometry layers
  created through JavaScript
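A sketch of the per-layer parameter bookkeeping this implies; every name
below is illustrative rather than the project's real API. When the
FilterPanel detaches from a layer its current values are stashed under that
layer, and when it attaches to another layer the saved values are loaded
back into the controls.

    #include <map>
    #include <string>

    typedef std::map<std::string, double> ParamSet;

    class FilterPanelState {
    public:
      // called when the panel leaves a layer
      void save(void *layer, const ParamSet &current) {
        perLayer[layer] = current;
      }

      // called when the panel attaches to a layer; falls back to an empty
      // set (i.e. the filter defaults) for layers never edited before
      ParamSet restore(void *layer) const {
        std::map<void *, ParamSet>::const_iterator it = perLayer.find(layer);
        return (it != perLayer.end()) ? it->second : ParamSet();
      }

    private:
      std::map<void *, ParamSet> perLayer;
    };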
- correct framerate calculation
- switching to a default screen size of 512x384 instead of 400x300
- reorganizing GUI controls
- introduced icons for both the capture and generator layers
- saving/restoring per-layer state of both filter and image parameters
- FilterPanel now updates correctly with layer-specific parameters
- better initialization of the video capture device in CVGrabber
  (see the sketch below)
- (something else I don't remember right now)
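A hedged sketch of what the capture-device initialization could look like,
assuming a QTKit-based grabber (the QTKit calls themselves are real API, but
the function and its error handling are illustrative, not the actual
CVGrabber code):

    #import <QTKit/QTKit.h>

    static QTCaptureSession *start_capture(void) {
      NSError *error = nil;

      // pick the default camera and make sure it opens before going further
      QTCaptureDevice *device =
          [QTCaptureDevice defaultInputDeviceWithMediaType:QTMediaTypeVideo];
      if (!device || ![device open:&error]) {
        NSLog(@"no usable video capture device: %@", error);
        return nil;
      }

      QTCaptureSession *session = [[QTCaptureSession alloc] init];
      QTCaptureDeviceInput *input =
          [[[QTCaptureDeviceInput alloc] initWithDevice:device] autorelease];
      if (![session addInput:input error:&error]) {
        NSLog(@"cannot add capture input: %@", error);
        [session release];
        return nil;
      }

      // decompressed frames are the easiest to hand over to the GL pipeline
      QTCaptureDecompressedVideoOutput *output =
          [[[QTCaptureDecompressedVideoOutput alloc] init] autorelease];
      if (![session addOutput:output error:&error]) {
        NSLog(@"cannot add capture output: %@", error);
        [session release];
        return nil;
      }

      [session startRunning];
      return session;
    }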
- implemented a native Cocoa-OpenGL screen
- introduced CVLayer to be used as the "binding class"
  within all native Cocoa layers
- the runloop is now driven by the Screen, which calls
  Context::cafudda() only when the DisplayLink asks for
  a new video frame to render (see the sketch after this list)
- both CVideoFile and CVideoGrabber are now working properly
  (effects are not available yet on the CVideoGrabber layer,
  but I'm going to extend the filtering functionality to
  the video grabber as soon as possible)
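A sketch of the DisplayLink-driven runloop described above: a CVDisplayLink
callback triggers one Context::cafudda() pass per display refresh. The
CoreVideo calls are real API; the Context stand-in and the wiring around it
are assumptions for illustration.

    #include <CoreVideo/CoreVideo.h>

    class Context {                  // stand-in for the real Context class
    public:
      void cafudda() { /* process layers and render one frame */ }
    };

    // called by CoreVideo on its own thread, once per display refresh
    static CVReturn display_link_cb(CVDisplayLinkRef link,
                                    const CVTimeStamp *now,
                                    const CVTimeStamp *outputTime,
                                    CVOptionFlags flagsIn,
                                    CVOptionFlags *flagsOut,
                                    void *userInfo) {
      (void)link; (void)now; (void)outputTime; (void)flagsIn; (void)flagsOut;
      static_cast<Context *>(userInfo)->cafudda();
      return kCVReturnSuccess;
    }

    // the Screen starts the link; from then on rendering follows the
    // display's refresh instead of free-running
    static CVDisplayLinkRef start_screen_runloop(Context *ctx) {
      CVDisplayLinkRef link = NULL;
      CVDisplayLinkCreateWithActiveCGDisplays(&link);
      CVDisplayLinkSetOutputCallback(link, &display_link_cb, ctx);
      CVDisplayLinkStart(link);
      return link;
    }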