Video Awesomeness

3D tools

Rhino 5
Maya
Blender
ZBrush
Sculptris (free software from the ZBrush makers; can export OBJ format)

  MODO
  CINEMA 4D
  Blender
  3D Coat

smaller apps

  123D Sculpt.

  SculptGL (sculpt right in the browser!)


Stream

Windows Media Encoder. Free. Broadcast a live event; specify the video and audio source.

Utility Spotlight: Screen Recorder

https://technet.microsoft.com/en-us/magazine/2009.03.utilityspotlight2.aspx

Simple web page to detect the type of video supported:

<html><head><script>
function videoCheck() {
  alert("h264:"+supports_h264_baseline_video());
  alert("webm:"+supports_webm_video());
  alert("ogg:"+supports_ogg_theora_video());
}
function supports_video() {
  return !!document.createElement('video').canPlayType;
}
function supports_h264_baseline_video() {
  if (!supports_video()) { return false; }
  var v = document.createElement("video");
  return v.canPlayType('video/mp4; codecs="avc1.42E01E, mp4a.40.2"'); //probably, maybe or blank/no
}
function supports_ogg_theora_video() {
  if (!supports_video()) { return false; }
  var v = document.createElement("video");
  return v.canPlayType('video/ogg; codecs="theora, vorbis"'); //probably, maybe or blank/no
}
function supports_webm_video() {//newly opensourced
  if (!supports_video()) { return false; }
  var v = document.createElement("video");
  return v.canPlayType('video/webm; codecs="vp8, vorbis"'); //probably, maybe or blank/no
}
</script></head><body>
<div onclick="videoCheck();"> click me to see formats supported </div>
</body></html>

Download youtube videos

http://en.savefrom.net/

Screen Capture

Camtasia                https://www.techsmith.com/camtasia.html



Open Broadcaster Software  https://obsproject.com/

HyperCam  http://www.hyperionics.com/

Blender https://www.blender.org/

Lightworks https://www.lwks.com/

Pinnacle Studio

Adobe premier elements

OBS Tutor

Install/extract then run OBS.

Create scenes: right-click in the 'Scenes' box (red 1) -> 'Add Scene' -> name it.
To add a hotkey to quickly switch between scenes:
  right-click a Scene -> 'Set Hotkey' -> input the key(s) you wish to use.

add some sources to the scene
   Method 1. Adding sources individually to each scene.
   Method 2. Adding sources as a global function.

   Method 1) For individual;
1. Right-click on the 'Sources' box. (red 2)
2. Select which kind of Source you wish to add.
Software Capture Source is the OBS equivalent of Screen Region from XSplit,
Bitmap adds an image file (overlay), and Video
Capture Device captures a webcam or screen capturer (e.g. Camtasia, etc.)
3. Name your source
   Method 2) For global, click on global sources, then click add. Repeat steps as above.

Now 'Edit' each Scene's sources to adjust size/placement (drag the rectangular red border).

'Show Preview' to see these scenes in effect, ...adjust as needed

settings menu:
General - Self-explanatory.
Encoding - Self-explanatory as well; the bit rate to stream at.
For audio, I personally use AAC/128.
Broadcast Settings - Unless you are only using OBS to record locally, set the mode to Live Stream.
Streaming service is self-explanatory, but if you are not using Twitch/Own3d,
you'll have to manually input the Channel Name/Stream Key/Server.
For those on Twitch, your Stream Key can be found on Twitch.
For server, select the one that provides the best ping for you.
Video - Your base resolution should be the resolution of the monitor you are streaming from.
To downscale the resolution for streaming (e.g. down to 720p), select it with the dropdown.
FPS is self-explanatory. Aero should be disabled if you plan to stream with the software capture option.
Audio - All self-explanatory.
Advanced - Do not touch unless you know what you are doing.
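The downscale option above is just division that preserves the aspect ratio. As a quick illustration (not OBS code; rounding to even dimensions is my assumption about encoder-friendly sizes):

```java
// Compute a downscaled streaming resolution from a base monitor resolution.
public class Downscale {
    public static int[] downscale(int baseW, int baseH, double factor) {
        // round to even numbers, as video encoders generally require
        int w = (int) Math.round(baseW / factor / 2) * 2;
        int h = (int) Math.round(baseH / factor / 2) * 2;
        return new int[]{w, h};
    }
    public static void main(String[] args) {
        int[] r = downscale(1920, 1080, 1.5);
        System.out.println(r[0] + "x" + r[1]); // 1280x720
    }
}
```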

AVCHD

AVCHD (Advanced Video Coding High Definition) is a file-based format
 for the digital recording (camcording) and playback of high-definition video.

Newer modes:
1080-line,
50-frame/s and
60-frame/s modes (AVCHD Progressive), and
stereoscopic video (AVCHD 3D).

The new video modes require double the data rate of previous modes.


For video compression, AVCHD uses the MPEG-4 AVC/H.264 standard,
  supporting a variety of
standard,
high definition, and
stereoscopic (3D) video resolutions.

For audio compression, it supports both Dolby AC-3 (Dolby Digital) and
uncompressed linear PCM audio. Stereo and multichannel surround (5.1) are both supported.

Aside from recorded audio and video, AVCHD includes many user-friendly features
to improve media presentation:
menu navigation,
simple slide shows and subtitles.

The menu navigation system is similar to DVD-video,
allowing access to individual videos from a common intro screen.
Slide shows are prepared from a sequence of AVC still frames,
and can be accompanied by a background audio track.

Subtitles are used in some camcorders to timestamp the recordings.

Audio, video, subtitle, and ancillary streams are multiplexed
into an MPEG transport stream and stored on media as binary files.
Usually, memory cards and HDDs use the FAT file system,
while optical discs employ UDF or ISO9660.


AVCHD and its logo are trademarks of Sony and Panasonic.

first-last serv

OpenCV (actionable video!)

http://opencv.org/opencv-java-api.html

Python version used on Pi   http://simplecv.org/

https://code.google.com/p/javacv/wiki/OpenCV2_Cookbook_Examples

JavaCV cookbook examples (Scala-based):
https://github.com/bytedeco/javacv-examples/tree/master/OpenCV2_Cookbook

BGR: blue, green, red
HSV: hue, saturation, value

Filter the colors of interest (min/max thresholds for H, S and V);
 the objects are then easier to track.

VideoCapture mycap object
mycap.open(location/pos)
mycap.set height/width
mycap.read
cvtColor( ...convert from BGR to HSV colorspace )
inRange(min, max)

Then perform morph ops (zap the remaining clutter/noise/white specks/uninteresting objects)
 via erode and dilate:
erode shrinks noise into the background, removing it from existence;
dilate makes the interesting object bigger and more prominent (fills in gaps, shadows, etc.).

so it makes the object more defined/black and white
 highly contrasted
   aka singled out!

Get a structuring element (the size/shape of the rectangular kernel the morph ops slide over the image).


Then set up trackbars
 to track the x/y of the object.

Final step:
 find contours (a vector of vectors of points)
    from the binary image (all object outlines found in it),
then get the x/y coordinates of the largest object (defined by its inner area).
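OpenCV does this with findContours + contourArea. To illustrate the "largest object by inner area" idea without any OpenCV dependency, here is a toy pure-Java sketch (not OpenCV's algorithm) that flood-fills each white region of a binary mask and reports the centroid of the largest one:

```java
import java.util.ArrayDeque;

// Toy stand-in for findContours: flood-fill each white region of a binary
// mask and report the centroid (x, y) of the one with the largest area.
public class LargestObject {
    public static int[] largestCentroid(int[][] mask) {
        int h = mask.length, w = mask[0].length;
        boolean[][] seen = new boolean[h][w];
        int bestArea = 0; long bestSumX = 0, bestSumY = 0;
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++) {
                if (mask[y][x] == 0 || seen[y][x]) continue;
                int area = 0; long sx = 0, sy = 0;          // accumulate this region
                ArrayDeque<int[]> stack = new ArrayDeque<>();
                stack.push(new int[]{x, y}); seen[y][x] = true;
                while (!stack.isEmpty()) {
                    int[] p = stack.pop();
                    area++; sx += p[0]; sy += p[1];
                    int[][] nbrs = {{1,0},{-1,0},{0,1},{0,-1}}; // 4-connected
                    for (int[] d : nbrs) {
                        int nx = p[0]+d[0], ny = p[1]+d[1];
                        if (nx>=0 && ny>=0 && nx<w && ny<h && mask[ny][nx]==1 && !seen[ny][nx]) {
                            seen[ny][nx] = true; stack.push(new int[]{nx, ny});
                        }
                    }
                }
                if (area > bestArea) { bestArea = area; bestSumX = sx; bestSumY = sy; }
            }
        if (bestArea == 0) return null;                     // no object found
        return new int[]{(int)(bestSumX/bestArea), (int)(bestSumY/bestArea)};
    }
    public static void main(String[] args) {
        int[][] mask = {
            {0,0,0,0,0,0},
            {0,1,0,1,1,1},   // lone speck vs. a 3x3 block
            {0,0,0,1,1,1},
            {0,0,0,1,1,1},
        };
        int[] c = largestCentroid(mask);
        System.out.println(c[0] + "," + c[1]); // 4,2 (center of the 3x3 block)
    }
}
```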




http://www.pluralsight.com/courses/getting-started-opencv-net?utm_source=marketo&utm_medium=email&utm_campaign=Newsletter_15_0708&utm_content=course&mkt_tok=3RkMMJWWfF9wsRokuK3NZKXonjHpfsX66OsqWKa1lMI%2F0ER3fOvrPUfGjI4DTMtgI%2BSLDwEYGJlv6SgFTrTFMal3z7gNUhU%3D

Install OpenCV

http://opencv.org/downloads.html
 takes you to SourceForge
  http://sourceforge.net/projects/opencvlibrary/files/opencv-win/3.1.0/opencv-3.1.0.exe/download


get opencv-3.1.0.exe
  extract it
  put appropriate dll in your path
     C:\Users\MDockery\Downloads\opencv\build\java\x64\opencv_java310.dll
  put jar in classpath
     C:\Users\MDockery\Downloads\opencv\build\java\opencv-310.jar

Eclipse setup (don't forget to specify the native library location)


Training/Classifiers

Two training commands: one generates a .vec file with positive/negative samples; the other generates the XML classifier.


Pre-Training (create samples)

 opencv_createsamples
  input parms:
     -img : input file (containing the object)
     -bg  : background description file (pointing to various negative/background images)
   ...also specify the number of samples to produce, max x/y/z rotation angles, window size, etc.
   output:
      a binary .vec file containing the positive & negative/background training images
            (or optionally -pngoutput files)


Train

The .vec file is used as input for opencv_traincascade
      (which in turn produces the cascade classifier xml file)

Detect

Instantiate a CascadeClassifier with the XML
 (it detects regions)

convert to grayscale

detectMultiScale
  grayImg
  1.1 // scale factor: how much the "search window" grows each pass
  10  // min neighbors required to say a region is positive
  min size (20,20) //...
  max size

then draw them

Sample OpenCV Threshold (binary/black-and-white; threshold min/max values)

package gui;

import java.awt.Font;
import java.awt.GridBagConstraints;
import java.awt.GridBagLayout;
import java.awt.image.BufferedImage;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.text.SimpleDateFormat;

import javax.imageio.ImageIO;
import javax.swing.ImageIcon;
import javax.swing.JFrame;
import javax.swing.JLabel;
import javax.swing.JSlider;
import javax.swing.event.ChangeEvent;
import javax.swing.event.ChangeListener;

import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfByte;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.*;

/** v2
 * read picture
 * gui slider for threshold min/max
 * show result on screen
 */

public class ThresholdSliderShow extends JFrame{

JLabel imageLabel=new JLabel(), minMaxLabel, origLabel;
JSlider sliderMin, sliderMax;
Mat imageMatGray=new Mat();
Mat imageMatBlackAndWhite=new Mat();

public static void main(String[] args) {
    System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
    ThresholdSliderShow frame=new ThresholdSliderShow();
    String filename="please_wait.png";
    Mat imageMatOrig=Imgcodecs.imread(filename);
    Imgproc.cvtColor(imageMatOrig, frame.imageMatGray, Imgproc.COLOR_BGR2GRAY); //from ch6
    frame.setAndShowThreshold(50, 100);
}

void setAndShowThreshold(int min, int max){
    //System.out.println("min:"+min+" max:"+max);
    Imgproc.threshold(imageMatGray,
        imageMatBlackAndWhite,
        min, // threshold value
        max, // value written to pixels at/below the threshold (THRESH_BINARY_INV)
        Imgproc.THRESH_BINARY_INV);
    this.showImageFromMat(imageMatBlackAndWhite);
}

void showImageFromMat(Mat matParm){
    MatOfByte byteMatBuf = new MatOfByte();
    Imgcodecs.imencode(".png", matParm, byteMatBuf);
    byte[] bytes = byteMatBuf.toArray();
    InputStream in = new ByteArrayInputStream(bytes);
    try{
        BufferedImage img = ImageIO.read(in);
        imageLabel.setIcon(new ImageIcon(img));
        //ImageIO.write(img, "png", new File(now+".png"));
        repaint(); validate();
    }catch(IOException ee) {
        System.out.println("showImageFromMat encountered an error:"+ ee.getMessage());
    }
}

private static final long serialVersionUID = 1L;
static Font font28=new Font("serif", Font.BOLD, 28);
static SimpleDateFormat dateFormat=new SimpleDateFormat("yyyyMMdd_HHmmss");
static GridBagConstraints gc=new GridBagConstraints();

public ThresholdSliderShow() {
    setSize(900, 700);
    setDefaultCloseOperation(JFrame.DISPOSE_ON_CLOSE);
    setLayout(new GridBagLayout());
    //gc.fill = GridBagConstraints.NONE;
    //gc.ipadx=10; gc.ipady=10;
    gc.fill = GridBagConstraints.HORIZONTAL;
    
    sliderMin=new JSlider(JSlider.HORIZONTAL, 10, 250, 50);
    sliderMax=new JSlider(JSlider.HORIZONTAL, 10, 250, 100);
    sliderMin.setMinorTickSpacing(2);
    sliderMin.setMajorTickSpacing(10);
    sliderMin.setPaintTicks(true);
    sliderMin.setPaintLabels(true);
    sliderMin.setLabelTable(sliderMin.createStandardLabels(10));
    sliderMin.addChangeListener(new ChangeListener() {
@Override public void stateChanged(ChangeEvent e) {
setAndShowThreshold(sliderMin.getValue(), sliderMax.getValue());
minMaxLabel.setText("Min:"+sliderMin.getValue()+" Max:"+sliderMax.getValue());
}
});
    
    sliderMax.setMinorTickSpacing(2);
    sliderMax.setMajorTickSpacing(10);
    sliderMax.setPaintTicks(true);
    sliderMax.setPaintLabels(true);
    sliderMax.setLabelTable(sliderMax.createStandardLabels(10));
    sliderMax.addChangeListener(new ChangeListener() {
@Override public void stateChanged(ChangeEvent e) {
setAndShowThreshold(sliderMin.getValue(), sliderMax.getValue());
minMaxLabel.setText("Min:"+sliderMin.getValue()+" Max:"+sliderMax.getValue());
}
});
    

origLabel=new JLabel(new ImageIcon("please_wait.png"));
    minMaxLabel=new JLabel("Min/Max"); //show min/max

    

    gc.gridx=0;  gc.gridy=0; gc.gridwidth=6; add(sliderMin, gc);

    gc.gridx=0;  gc.gridy=1; gc.gridwidth=6; add(sliderMax, gc);

    gc.gridx=0;  gc.gridy=2; gc.gridwidth=1; add(minMaxLabel, gc);

    gc.gridx=0;  gc.gridy=3; gc.gridwidth=1; add(origLabel, gc);
    gc.gridx=2;  gc.gridy=3; gc.gridwidth=1; add(imageLabel,gc);
    setVisible(true);  
}

}

====================

Blur

package com.vid;

import javax.swing.ImageIcon;
import javax.swing.JFrame;
import javax.swing.JLabel;

import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.Size;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;

public class JustBlur {
    static ImageIcon labelIcon=new ImageIcon();
    static JLabel labelImageOrig=new JLabel(labelIcon);

public static void main(String[] args) {
    System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
    JFrame frame=new JFrame("simple");     
    Mat imageMatOrig=Imgcodecs.imread("lena.png"), imageMatBlurred=new Mat();
frame.add(labelImageOrig);
labelIcon.setImage(Utils.getJavaBufferedImageFromOpenCvMatrix(imageMatOrig)); frame.pack(); //add pic and autoSize frame
frame.setVisible(true);
Imgproc.blur(imageMatOrig, imageMatBlurred, new Size(10, 10)); // remove some noise
try{Thread.sleep(2000);}catch(InterruptedException e){}
labelIcon.setImage(Utils.getJavaBufferedImageFromOpenCvMatrix(imageMatBlurred)); frame.pack(); 
frame.revalidate(); frame.repaint();
}
}

====================================
BGR (blue green red) to HSV (hue sat val)

package com.vid;

import javax.swing.ImageIcon;
import javax.swing.JFrame;
import javax.swing.JLabel;

import org.opencv.core.Core;
import org.opencv.core.Mat;

import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;

public class JustCvtColor {
    static ImageIcon labelIcon=new ImageIcon();
    static JLabel labelImageOrig=new JLabel(labelIcon);

public static void main(String[] args) {
    System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
    JFrame frame=new JFrame("simple");     
    Mat imageMatOrig=Imgcodecs.imread("lena.png"), imageMatColorConverted=new Mat();

labelIcon.setImage(Utils.getJavaBufferedImageFromOpenCvMatrix(imageMatOrig)); frame.pack(); //add pic and autoSize frame
frame.add(labelImageOrig); frame.pack(); //add pic and autoSize frame
frame.setVisible(true);
Imgproc.cvtColor(imageMatOrig, imageMatColorConverted, Imgproc.COLOR_BGR2HSV); // convert frame to HSV
try{Thread.sleep(2000);}catch(InterruptedException e){}
labelIcon.setImage(Utils.getJavaBufferedImageFromOpenCvMatrix(imageMatColorConverted)); frame.pack(); //add pic and autoSize frame
frame.revalidate(); frame.repaint();
}
}


=====================================
Useful function: converts an OpenCV Mat to a Java BufferedImage (which can be used in an ImageIcon on a JLabel)
package com.vid;

import java.awt.image.BufferedImage;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

import javax.imageio.ImageIO;

import org.opencv.core.Mat;
import org.opencv.core.MatOfByte;
import org.opencv.imgcodecs.Imgcodecs;

public class Utils {
public static BufferedImage getJavaBufferedImageFromOpenCvMatrix(Mat matParm){
MatOfByte byteMatBuf=new MatOfByte();
System.out.println("getBufferedImage coding as png");
Imgcodecs.imencode(".png", matParm, byteMatBuf);
System.out.println("getBufferedImage bytearraying");
InputStream in=new ByteArrayInputStream(byteMatBuf.toArray());
System.out.println("getBufferedImage reading");
try{return ImageIO.read(in);}catch(IOException ioe){
System.out.println("getBufferedImage error:"+ioe);
} return null;
}
}
===================
Morph Ops Erode
package com.vid;

import javax.swing.ImageIcon;
import javax.swing.JFrame;
import javax.swing.JLabel;

import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.Size;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;

public class JustErode {
    static ImageIcon labelIcon=new ImageIcon();
    static JLabel labelImageOrig=new JLabel(labelIcon);

public static void main(String[] args) {
   System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
   JFrame frame=new JFrame("simple");    
   Mat imageMatOrig=Imgcodecs.imread("lena.png"), imageMatEroded=new Mat();
frame.add(labelImageOrig);
labelIcon.setImage(Utils.getJavaBufferedImageFromOpenCvMatrix(imageMatOrig)); frame.pack(); //add pic and autoSize frame
frame.setVisible(true);
try{Thread.sleep(2000);}catch(InterruptedException e){}


// morph ops - erode eats away at white/foreground regions; applied twice with a small element
Mat structuringElement = Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new Size(5, 5));
Imgproc.erode(imageMatOrig, imageMatEroded, structuringElement);
Imgproc.erode(imageMatEroded, imageMatEroded, structuringElement);

labelIcon.setImage(Utils.getJavaBufferedImageFromOpenCvMatrix(imageMatEroded)); frame.pack(); 
frame.revalidate(); frame.repaint();
}
}

===================
Morph Ops Dilate
package com.vid;

import javax.swing.ImageIcon;
import javax.swing.JFrame;
import javax.swing.JLabel;

import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.Size;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;

public class JustDilate {
    static ImageIcon labelIcon=new ImageIcon();
    static JLabel labelImageOrig=new JLabel(labelIcon);

public static void main(String[] args) {
   System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
   JFrame frame=new JFrame("simple");    
   Mat imageMatOrig=Imgcodecs.imread("lena.png"), imageMatDilated=new Mat();
frame.add(labelImageOrig);
labelIcon.setImage(Utils.getJavaBufferedImageFromOpenCvMatrix(imageMatOrig)); frame.pack(); //add pic and autoSize frame
frame.setVisible(true);
try{Thread.sleep(2000);}catch(InterruptedException e){}


// morph ops - dilate grows white/foreground regions; applied twice with a large element
Mat dilateElement = Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new Size(24, 24));
Imgproc.dilate(imageMatOrig, imageMatDilated, dilateElement);
Imgproc.dilate(imageMatDilated, imageMatDilated, dilateElement);
labelIcon.setImage(Utils.getJavaBufferedImageFromOpenCvMatrix(imageMatDilated)); frame.pack(); 
frame.revalidate(); frame.repaint();
}
}

===================
HSV min/max InRange
package com.vid;

import javax.swing.ImageIcon;
import javax.swing.JFrame;
import javax.swing.JLabel;

import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.Scalar;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;

public class JustInRange {
    static ImageIcon labelIcon=new ImageIcon();
    static JLabel labelImageOrig=new JLabel(labelIcon);

public static void main(String[] args) {
   System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
   JFrame frame=new JFrame("simple");  
   Mat imageMatOrig=Imgcodecs.imread("lena.png"), imageMatHsv=new Mat(), imageMatMask=new Mat();

labelIcon.setImage(Utils.getJavaBufferedImageFromOpenCvMatrix(imageMatOrig)); frame.pack(); //add pic and autoSize frame
frame.add(labelImageOrig); frame.pack(); //add pic and autoSize frame
frame.setVisible(true);
Imgproc.cvtColor(imageMatOrig, imageMatHsv, Imgproc.COLOR_BGR2HSV); // convert frame to HSV
//H ranges 0-180, S and V range 0-255
Scalar minValues=new Scalar(20,20,20);
Scalar maxValues=new Scalar(150,200,200);
Core.inRange(imageMatHsv, minValues, maxValues, imageMatMask); // threshold the HSV image, not the BGR original

try{Thread.sleep(2000);}catch(InterruptedException e){}
labelIcon.setImage(Utils.getJavaBufferedImageFromOpenCvMatrix(imageMatMask)); frame.pack(); //add pic and autoSize frame
frame.revalidate(); frame.repaint();
}
}

===================
http://opencvlover.blogspot.com/2011/07/accesing-camera-using-opencv.html

#create a window using highgui,
#capture a stream from the webcam via cvCaptureFromCAM,
#convert it to image frames via IplImage.

#Compile/run this code to get a live stream from your webcam.

#include <iostream>
#include <cv.h>
#include <highgui.h>

using namespace std;
char key;
int main()
{
    cvNamedWindow("Camera_Output", 1);    //Create window
    CvCapture* capture = cvCaptureFromCAM(CV_CAP_ANY);  //Capture using any camera connected to your system
    while(1){ //Infinite loop for live streaming

        IplImage* frame = cvQueryFrame(capture); //Create image frames from capture
        cvShowImage("Camera_Output", frame);   //Show image frames on created window
        key = cvWaitKey(10);     //Capture Keyboard stroke
        if (char(key) == 27){
            break;      //If you hit ESC key loop will break.
        }
    }
    cvReleaseCapture(&capture); //Release capture.
    cvDestroyWindow("Camera_Output"); //Destroy Window
    return 0;
}

==========================

Pinnacle versions

Studio 10 (2005)
Code base from Liquid Edition, now called Avid Liquid.
Real-time preview at full resolution was introduced, along with the ability to mix
PAL, NTSC, 4:3 and 16:9 footage on the timeline.
Can burn HD video to standard DVD media
that will then replay on Toshiba HD DVD players. Vista compatibility.

Studio 11 (2007)
Replaced SmartSound's music generation with Scorefitter, a MIDI-based version written in-house.
Native HDV and AVCHD editing, and HD DVD on standard discs.
Keyframing became possible on most effects.
Blu-ray AVCHD to standard DVDs.

Ultimate became the new top-of-the-line consumer video editing application.
It included SoundSoap PE, an advanced sound-cleaning tool,
Dolby 5.1 Surround encoding,
proDAD VitaScene, and
Moving Picture, a
precision pan-and-zoom tool.
A green-screen sheet to produce chroma key effects.

Studio 12 (2008)
Editing enhancements included
markers that can be placed on the timeline "on the fly",
improved audio controls, and the addition of Montage,
which allows multi-layered video and still composites to be generated within pre-defined templates.
MOD import and
FLV output were added, as was
direct upload to YouTube.

Full Blu-ray disc burning with menus.
proDAD VitaScene.
Replaces SoundSoap and Moving Picture with
Boris Graffiti 5.1 and
Magic Bullet Looks SE.
Content Transfer window, which allows importing any plug-ins or premium content purchased for Studio 10 or 11.

Studio 14 (2009)
Includes a few special third-party plugins.
"Studio HD Ultimate Collection" includes more third-party plugins and a
green-screen sheet (which can also be purchased separately).
The video capture interface was redesigned; adds
stop-motion capture,
image stabilization effects and a
motion title editor/creator.

Studio 15 (2011)
Archive/Restore,
DivX Plus HD (.MKV) support,
better AVCHD support (v14 did not handle large AVCHD videos with stability),
Intel Media SDK integration,
more effects, more transitions from Hollywood FX, and a greater variety of menus.
Reintegration of SmartSound quick tracks.

Studio 16 (2012)
Unique 3D support & optimization,
50GB of free in-app cloud storage, and
file sharing via Box.

Studio 17 (2013)
Performance boost with dramatically faster AVC video rendering,
enhanced CUDA support and a streamlined interface.
New Live Screen Capture,
support for AVCHD 2.0 and
valuable new extras from Red Giant and iZotope.

Studio 18 (2014)
Native 64-bit builds,
4K video support,
XAVC S file support,
enhanced screen capture support, and
17 royalty-free audio tracks.

================================================

Chroma 

(overlay select non-green subject onto another video/image)
The Chroma key tool in Pinnacle Studio lets users superimpose a subject,
  recorded against a green or blue screen, onto any other video or background image.



The video editing software in Pinnacle Studio 12
  removes the blue or green screen so that the subject alone is seen on the video.

Step 1: Prepare the Green/Blue Screen Shot
Hang the big green screen/sheet on a wall.
Set up the camera so only the screen is seen (only green fills the viewfinder).
Put the subject in front of the screen to test the lighting and blocking.
There should be no light glares, hot spots, or harsh shadows on the screen.
Don't let the subject wear green, or they will fade into the background (unless this is part of your creative plan).

Step 2: Film/record in front of the screen.
Step 3: Film/record the background image you want.
Step 4: Upload/capture the video to Pinnacle Studio.
Step 5: Edit the video; timeline tracks:
titles,
video overlays;
main video footage = 1st track,
Chroma key clips   = 3rd track.
Drag the video to the video timeline,
then choose/drag the Chroma key image to the 3rd track.
Step 6: Re-sizing the Image
To match the image to your video footage stage,
drag the handles bordering the image,
or set it by entering the width, height, vertical and horizontal values.
Step 7: Using the Chroma Key
Toolbox menu -> "Add Video Overlay Effects" tab -> Chroma key.
A pop-up menu allows selecting the Enable Chroma Keying tab;
choose the eye-dropper icon.
Click the area on the right of the video screen using the eye-dropper icon.
This will change the solid color to your new background.
Step 8: Playing the Video!
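The steps above drive Pinnacle's GUI, but the keying operation itself is easy to picture. Here is a minimal pure-Java sketch (plain BufferedImage, not Pinnacle's actual algorithm; the simple tolerance test is a naive assumption): every foreground pixel close enough to pure green is replaced by the corresponding background pixel.

```java
import java.awt.Color;
import java.awt.image.BufferedImage;

// Toy chroma key: green-ish pixels in the foreground are swapped for the
// corresponding background pixels; everything else is kept.
public class ChromaKey {
    public static BufferedImage key(BufferedImage fg, BufferedImage bg, int tolerance) {
        BufferedImage out = new BufferedImage(fg.getWidth(), fg.getHeight(), BufferedImage.TYPE_INT_RGB);
        for (int y = 0; y < fg.getHeight(); y++)
            for (int x = 0; x < fg.getWidth(); x++) {
                Color c = new Color(fg.getRGB(x, y));
                boolean isGreen = c.getGreen() > 255 - tolerance   // strong green,
                        && c.getRed() < tolerance                  // little red,
                        && c.getBlue() < tolerance;                // little blue
                out.setRGB(x, y, isGreen ? bg.getRGB(x, y) : fg.getRGB(x, y));
            }
        return out;
    }
    public static void main(String[] args) {
        BufferedImage fg = new BufferedImage(2, 1, BufferedImage.TYPE_INT_RGB);
        BufferedImage bg = new BufferedImage(2, 1, BufferedImage.TYPE_INT_RGB);
        fg.setRGB(0, 0, Color.GREEN.getRGB());   // screen pixel -> replaced
        fg.setRGB(1, 0, Color.RED.getRGB());     // subject pixel -> kept
        bg.setRGB(0, 0, Color.BLUE.getRGB());
        bg.setRGB(1, 0, Color.BLUE.getRGB());
        BufferedImage out = key(fg, bg, 60);
        System.out.println(out.getRGB(0, 0) == Color.BLUE.getRGB()); // true
        System.out.println(out.getRGB(1, 0) == Color.RED.getRGB());  // true
    }
}
```

This is also why the notes above warn against the subject wearing green: those pixels pass the `isGreen` test and get replaced too.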

=====================================================================

PIP

The Picture-in-Picture (PIP) tool in Pinnacle Studio allows showing more than one video on the same screen.
   PIP can also show multiple video shots in a single screen to simulate a simultaneous broadcast.
 
Step 1: Upload the background and overlay videos.
Step 2: Set Pinnacle Studio to Timeline mode (View menu -> Timeline tab).
Step 3: Prep the videos (both background footage and overlay footage).
Drag the background video to the top video track,
then drag the footage you want to place into it onto the overlay track.
Step 4: Enable PIP
Double-click the overlay footage -> effects panel -> PIP icon (2nd icon from bottom on the left).
Play to see the PIP effect in your videos.

Step 5: Re-sizing the Overlay Footage with the PIP tool,
to make it smaller than the background footage.
Step 6: Creating a Split-Screen PIP Effect
PIP can also create a split-screen effect of the two videos
via the Crop button - drag the overlay footage inwards.
After dragging the overlay footage, you will see a box that controls the visible portion of your overlay frame.
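As with chroma key, the PIP composite itself boils down to two drawImage calls. A minimal pure-Java sketch (the quarter-size and 10-pixel margin are arbitrary choices, not Pinnacle's defaults):

```java
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

// Picture-in-picture in one call: scale the overlay to quarter size and
// draw it into the bottom-right corner of the background frame.
public class PipDemo {
    public static BufferedImage pip(BufferedImage bg, BufferedImage overlay) {
        BufferedImage out = new BufferedImage(bg.getWidth(), bg.getHeight(), BufferedImage.TYPE_INT_RGB);
        int w = bg.getWidth() / 4, h = bg.getHeight() / 4;            // overlay box size
        int x = bg.getWidth() - w - 10, y = bg.getHeight() - h - 10;  // 10px margin
        Graphics2D g = out.createGraphics();
        g.drawImage(bg, 0, 0, null);            // full-frame background
        g.drawImage(overlay, x, y, w, h, null); // scales overlay into the box
        g.dispose();
        return out;
    }
    public static void main(String[] args) {
        BufferedImage bg = new BufferedImage(640, 480, BufferedImage.TYPE_INT_RGB);
        BufferedImage overlay = new BufferedImage(320, 240, BufferedImage.TYPE_INT_RGB);
        for (int y = 0; y < 240; y++)
            for (int x = 0; x < 320; x++) overlay.setRGB(x, y, 0xFFFF0000); // solid red
        BufferedImage out = pip(bg, overlay);
        System.out.println(Integer.toHexString(out.getRGB(560, 420) & 0xFFFFFF)); // inside the box
        System.out.println(Integer.toHexString(out.getRGB(10, 10) & 0xFFFFFF));   // background
    }
}
```

A split-screen variant would simply crop each source (subimage) and draw the two halves side by side instead.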

Face Rect with Text

package com.vid;

import java.awt.GridBagConstraints;
import java.awt.GridBagLayout;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

import javax.imageio.ImageIO;
import javax.swing.ImageIcon;
import javax.swing.JButton;
import javax.swing.JFrame;
import javax.swing.JLabel;

import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfByte;
import org.opencv.core.MatOfRect;
import org.opencv.core.Point;
import org.opencv.core.Rect;
import org.opencv.core.Scalar;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;
import org.opencv.objdetect.CascadeClassifier;
import org.opencv.videoio.VideoCapture;
/** read cam
 * slight delay
 * show cams
 */

public class ShowCamFacesWithText extends JFrame{
JButton startButton=new JButton("Start/Stop");
String img2="lena.jpg";
ImageIcon imgIcon=new ImageIcon(), imgIcon2=new ImageIcon();
Mat imageMatGray=new Mat(); VideoCapture vidCap;
JLabel imageLabel=new JLabel(imgIcon),imageLabel2=new JLabel(imgIcon2);
private static final long serialVersionUID = 1L;
static GridBagConstraints gc=new GridBagConstraints();
InputStream inStream;
Scalar green=new Scalar(0, 255, 0);
public static void main(String[] args) {
  System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
       ShowCamFacesWithText gui=new ShowCamFacesWithText();
       Mat camFeedMatrix=new Mat();   //stores webcam frames
     
  gui.vidCap.open(0);
  if(!gui.vidCap.isOpened()){System.out.println("Not open"); return;}
  while(true){ // loop forever; the Start/Stop button exits the app
  try{Thread.sleep(50);}catch(Exception e){}
gui.vidCap.read(camFeedMatrix);  //read/store cam frame/image to matrix
gui.showImgAndFacesFromCamFrameMat(camFeedMatrix);
  }
  gui.vidCap.release();
  gui.vidCap=null;

}
/**
* put frame from webcam onto gui
* @param matParm image matrix
*/
void showImgAndFacesFromCamFrameMat(Mat matParm){
MatOfByte byteMatBuf=new MatOfByte();
Imgcodecs.imencode(".png", matParm, byteMatBuf);
byte[] bytes=byteMatBuf.toArray();
inStream=new ByteArrayInputStream(bytes);
//if(img2.equalsIgnoreCase("lena.jpg")) img2="2linee.png"; else img2="lena.jpg";
try{imgIcon.setImage(ImageIO.read(inStream));
//imgIcon2.setImage(ImageIO.read(new File(img2)));
Mat facesMat=getFacesImg(matParm);
imgIcon2.setImage(Utils.getJavaBufferedImageFromOpenCvMatrix(facesMat));
repaint(); validate();
}catch(IOException ee) {
System.out.println(ee.getMessage() ); ee.printStackTrace();
}
}
Mat getFacesImg(Mat inMat){
   CascadeClassifier faceDetectorXMLCascadeClassifier=
    new CascadeClassifier("lbpcascade_frontalface.xml");
   MatOfRect faceDetectionsMatRectangles=new MatOfRect();
   //detect rectangles of faces from src image
   faceDetectorXMLCascadeClassifier.detectMultiScale(inMat, faceDetectionsMatRectangles);
   //System.out.println("Number of Faces Detected:" +faceDetectionsMatRectangles.toArray().length);

        // Draw rectangle box around each face.
        for (Rect rect : faceDetectionsMatRectangles.toArray()) {
        Imgproc.rectangle(inMat,
            new Point(rect.x, rect.y),
            new Point(rect.x+rect.width, rect.y+rect.height),
            new Scalar(0, 255, 0)
         );
Imgproc.putText(inMat,
"x:"+rect.x+" y:"+rect.y,
new Point(rect.x, rect.y), //bottom left origin of text
Core.FONT_ITALIC, //int
2.0, //fontscale
green //scaler
);
        }
        return inMat;
}
public ShowCamFacesWithText(){
setSize(900, 700);
Mat imageMatOrig=Imgcodecs.imread("lena.png");
Mat imageMatOrig2=Imgcodecs.imread("lena.png");

vidCap=new VideoCapture();
setDefaultCloseOperation(JFrame.DISPOSE_ON_CLOSE);
setLayout(new GridBagLayout());
gc.fill = GridBagConstraints.HORIZONTAL;

startButton.addActionListener(new ActionListener() {
@Override public void actionPerformed(ActionEvent e) {
  vidCap.release();   vidCap=null;
  try{Thread.sleep(100);}catch(Exception ee){}
  System.exit(0);
}
});
imgIcon.setImage(Utils.getJavaBufferedImageFromOpenCvMatrix(imageMatOrig));
imgIcon2.setImage(Utils.getJavaBufferedImageFromOpenCvMatrix(imageMatOrig2));

gc.gridx=0;  gc.gridy=0; gc.gridwidth=1; add(startButton,gc);
gc.gridx=0;  gc.gridy=1; gc.gridwidth=1; add(imageLabel,gc);
gc.gridx=1;  gc.gridy=1; gc.gridwidth=1; add(imageLabel2,gc);
setVisible(true);
}
}
