Apple Compressor: How to make the most of your unused CPU cores and your nearby Macs

Friday, 18 May 2018

If you have lots of video to transcode and tight deadlines, sometimes even Apple Compressor isn’t fast enough for the job. If you have a Mac with multiple cores and lots of RAM, or a network of Macs to spare, you can use this power to speed up video conversions and transcodes.

Using more cores on your Mac

If you have many CPU cores and enough RAM, you can have multiple copies (‘instances’) of Compressor run on your iMac or Mac Pro at the same time. Each copy works on a different range of frames of the source video.

The number of Compressor instances you can set up on a Mac depends on the number of cores and the amount of RAM installed. You need at least 8 cores and 4GB of RAM to run even one additional instance of Compressor on your Mac.

Maximum number of additional instances of Compressor that can run on a Mac:

RAM installed:  2GB  4GB  6GB  8GB  12GB  16GB  32GB  64GB
 4 cores         0    0    0    0    0     0     0     0
 8 cores         0    1    1    1    1     1     1     1
12 cores         0    1    2    2    2     2     2     2
16 cores         0    1    2    3    3     3     3     3
24 cores         0    1    2    3    5     5     5     5

This means that your Mac needs a minimum of 8 cores and 4GB of RAM to have two instances of Compressor running at the same time (the original plus one additional). MacBook Pros (as of Spring 2018) have a maximum of 4 cores - described as ‘quad-core’ CPUs - so cannot run additional instances.
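As a rough sketch, the table can be modelled as a simple calculation: one extra instance per 4 cores beyond the first 4, and per 2GB of RAM beyond the first 2. The formula and function name here are my own reverse-engineering of the table, not anything Apple documents:

```python
# Maximum number of *additional* Compressor instances, modelling the table above.
# The thresholds are inferred from the published table, not from Apple documentation.
def max_additional_instances(cores: int, ram_gb: int) -> int:
    by_cores = (cores - 4) // 4   # one extra instance per 4 cores beyond 4
    by_ram = (ram_gb - 2) // 2    # one extra instance per 2GB of RAM beyond 2
    return max(0, min(by_cores, by_ram))
```

This reproduces every cell in the table above, including the five-instance maximum on a 24-core machine.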

From Apple’s support document: Compressor: Create additional instances of Compressor:

To enable instances of Compressor

  1. Choose Compressor > Preferences (or press Command-Comma).
  2. Click Advanced.
  3. Select the “Enable additional Compressor instances” checkbox, then choose a number of instances from the pop-up menu.

Important: If you don’t have enough cores or memory, the “Enable additional Compressor instances” checkbox in the Advanced preferences pane is dimmed.

Using additional Macs on your network

Once you install Compressor on your other Macs, you can use those Macs to help with video transcoding tasks.

To create a group of computers to transcode your videos:

  1. Set the preferences in Compressor on each Mac in the network to “Allow other computers to process batches on my computer” (in the ‘My Computer’ tab of the preferences dialog).
  2. On the Mac you want to use to control the video transcoding, use the ‘Shared Computers’ section of Compressor preferences to make a group of shared computers.
  3. Add a new untitled group (using the ‘+’ button).
  4. Name it by double-clicking it, replacing ‘Untitled’ and pressing Return.
  5. In the list of available computers (on the right), select the checkbox next to each computer that you want to add to the group.

Once this group is set up, use Compressor to set up a transcode as normal. Before clicking the Start button, click the “Process on” pop-up menu and choose the group of computers that you want to use to process your batch.

There are more details in Apple’s support document: Compressor: Transcode batches with multiple computers.

Audio for 360° video and VR experiences - Apple job postings

Monday, 14 May 2018

More from Apple’s job site. This time there are signs that they are looking to develop features for their applications, OSes and hardware to support spatial audio. Spatial audio allows creators to define soundscapes in terms of the position of sound sources relative to listeners. This means that if I hear someone start talking to my left and I turn towards them, the sound should then seem to come from what I'm looking at - from the front. Useful for 360° spherical video, fully-interactive VR experiences and future OS user interfaces.
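The ‘turn towards the voice’ behaviour boils down to recomputing each source’s direction relative to the listener’s head orientation. A minimal 2D sketch in Python - a simplification of my own, since real renderers (binaural, HOA, VBAP) also model elevation, distance and room acoustics:

```python
# Head-relative direction: the core idea behind spatial audio rendering,
# reduced to a single yaw angle. This 2D simplification is illustrative only.

def head_relative_azimuth(source_azimuth: float, listener_yaw: float) -> float:
    """Angle of a sound source relative to where the listener is facing.

    0 = straight ahead, -90 = hard left, +90 = hard right (degrees).
    """
    angle = (source_azimuth - listener_yaw) % 360.0
    return angle - 360.0 if angle > 180.0 else angle

# Someone starts talking 90 degrees to my left while I face forward (yaw 0)...
print(head_relative_azimuth(-90, 0))    # -90.0: heard hard left
# ...so I turn to face them (yaw -90): the voice now comes from the front.
print(head_relative_azimuth(-90, -90))  # 0.0: straight ahead
```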

At the moment there are two relevant vacancies: 

Apple Hardware Engineering is looking for an Audio Experience & Prototyping Engineer

Apple’s Technology Development Group is looking for an Audio Experience and Prototyping Engineer to help prototype and define new audio UI/UX paradigms. This engineer will work closely with the acoustic design, audio software, product design, experience prototyping, and other teams to guide the future of Apple’s audio technology and experience. The ideal candidate will have a background in spatial audio experience design (binaural headphone rendering, HOA, VBAP), along with writing audio supporting software and plugins.

Experience in the following strongly preferred:

  • Sound design for games or art installations
  • Writing apps using AVAudioEngine
  • Swift / Objective-C / C++
  • Running DAW software such as Logic, ProTools, REAPER, etc.

Closer to post production, Apple’s Interactive Media Group Core Audio team is looking for a Spatial Audio Software Engineer to work in Silicon Valley:

IMG’s Core Audio team provides audio foundation for various high profile features like Siri, phone calls, Face Time, media capture, playback, and API’s for third party developers to enrich our platforms. The team is looking for talented engineers who are passionate about building audio software products for millions of customers and care about overall user experience. You will be pushing the boundaries of spatial audio experience for future technologies.

  • Key Advantage : Experience with audio engines that are part of Digital Audio Workstations or Game audio systems
  • Advantage : Experience with Spatial audio formats (Atmos, HOA etc) is desirable.

I gather that the Logic Pro digital audio workstation team are based in Germany. Apple are also looking for a Spatial Audio Software Engineer to work in Berlin. 

For iOS and macOS, Apple are also looking for a Core Audio Software Engineer in Zurich:

The team is looking for talented engineers who are passionate about building audio software products for millions of customers and care about overall user experience. You will be pushing the boundaries of spatial audio experience for future technologies.

If you think this kind of activity is too little too late, there was at least one vacancy for a Spatial Audio Software Engineer back in July 2017.

Although Apple explore many technical directions for products that never see the light of day, I expect that spatial audio has a good future at Apple.

Apple Video job postings 2018… Cloud, IP production, 3D/VR in 2019?

Saturday, 28 April 2018

A good way of seeing what Apple plans to work on is to check out their jobs site. A July 2017 job posting for a pro workflow expert to set up a studio ends up with Apple giving a journalist a tour of the lab in April 2018.

Here is a round-up of recent Apple Pro Apps-related job posts. They hint as to what might be appearing in Apple’s video applications in 2019.

Many start with this description of the Apple Video Applications group:

The Video Applications group develops leading media creation apps including Memories, Final Cut Pro X, iMovie, Motion, and Clips. The team is looking for a talented software engineer to help design and develop future features for these applications.

This is an exciting opportunity to apply your experience in video application development to innovative media creation products that reach millions of users.

Senior Engineer, Cloud Applications

Job number 113527707, posted March 2, 2018:

The ideal candidate will have in-depth experience leveraging both database and client/server technologies. As such, you should be fluent with cloud application development utilizing CloudKit or other PAAS (“Platform as a Service”) platforms.

The main NLE makers have come to cloud-enabling their tools relatively late compared to other creative fields. Apple currently allow multiple people to edit the same document in iWork at the same time, but sharing multiple gigabytes of video data is much harder than keeping a Pages or Numbers document in sync across the internet. Avid have recently announced Amazon-powered cloud video editing services coming this year. It looks like Apple isn't shying away from at least exploring cloud-based editing in 2018.

Cloud features aren’t just for macOS video applications: there was an October 2017 posting for a MacOS/iOS Engineer - Video Applications (Cloud) - Job number 113167115.

Senior Software Engineer, Live Video

Job number 113524253, posted February 27, 2018:

The ideal candidate will have in-depth experience leveraging video editing, compositing, compression, and broadcasting technologies.

The key phrase here is ‘Live Video’ - this could be Apple making sure their tools will be able to work in IP-enabled post workflows. Broadcasters are now connecting their hardware via Ethernet instead of the older SDI technology. Engineering this sort of thing is about keeping everything in sync while sharing streams of video across 10-Gigabit Ethernet.

I wrote about BBC R&D exploring IP production in June 2017. Recently they've been seeing how IP production could use cloud services: “Beyond Streams and Files - Storing Frames in the Cloud”.

Sr. Machine Learning Engineer - Video Apps

Job number 113524253, posted April 12, 2018:

Apple is seeking a Machine Learning (ML) technologist to help set technology strategy for our Video Applications Engineering team. Our team develops Apple's well-known video applications, including Final Cut Pro, iMovie, Memories part of the Photos app, and the exciting new Clips mobile app.

We utilize both ML and Computer Vision (CV) technologies in our applications, and are doing so at an increasing pace.

We are looking for an experienced ML engineer/scientist who has played a significant role in multiple ML implementations — ideally both in academia and in industry — to solve a variety of problems.

You will advise and consult on multiple projects within our organization, to identify where ML can best be employed, and in areas of media utilization not limited to images and video.


We expect that you will have significant software development and integration knowledge, in order to be both an advisor to, and significant developer on, multiple projects.

This follows on from a vacancy last July for a video applications software engineer ‘with machine learning experience.’ 

It looks like the Video Applications team are stepping up their investments in machine learning - expecting to use it in multiple projects: maybe different features in the different applications they work on.

One example would be improving tracking of objects in video. Instead of tracking individual pixels to hide or change a sign on the side of a moving vehicle, machine learning would recognise the changing position of the vehicle and the sign, and be able to interpret the graphics and text in the sign itself.

macOS High Sierra 10.13 introduced machine learning features in Autumn 2017. Usually Pro Apps users would need to wait at least a year for features available in the newest version of macOS, because editors don’t want to update their systems until the OS feels reliable enough for post production. Interestingly, with the Final Cut Pro 10.4.1 update the Video Applications team have forced the issue: the current version of Final Cut (plus Motion) won’t run on macOS Sierra 10.12. At least that means new Final Cut features can start relying on new macOS features introduced last year. I wrote about Apple WWDC sessions on media in June 2017.

Senior UI Engineer, Video Applications (3D/VR)

Job number 113524287, posted February 23, 2018:

Your responsibilities will include the development and improvement of innovative and intuitive 3D and VR user interface elements. You will collaborate closely with human interface designers, and other engineers on designing and implementing the best possible user experience. The preferred candidate should have an interest and relevant experience in developing VR user interfaces.

Additional Requirements

  • Experience with OpenGL, OpenGL ES or Metal
  • Experience developing AR/VR software (SteamVR / OpenVR)
  • macOS and/or iOS development experience

Notice here that this is not a user interface engineer who will create UI for a 3D application. Apple plan to at least investigate developing 3D user interfaces that work in VR. Although this engineer is being sought by the video applications team, who knows where else in Apple 3D interface design for VR might end up being used.

See also VR Jobs at Apple - July 2017.


Today’s Final Cut Pro 10.4.1, Motion 5.4.1 and Compressor 4.4.1 updates require macOS High Sierra 10.13.2

Monday, 09 April 2018

Last Thursday Apple announced that Final Cut Pro 10.4.1 would be available today.

The specification page for Final Cut Pro, Motion and Compressor states that the minimum requirements have changed from macOS Sierra 10.12.6 to macOS High Sierra 10.13.2 or later. In order to get today’s free updates for Final Cut Pro, Motion and Compressor, your Mac must be running 10.13.2 or newer. You won't see these updates in the Mac App store if you are using an older version of the OS.

It is rare for Final Cut Pro to need such a relatively new version of macOS. Since 2011, the ProApps team have only required an OS version up to 16 months old.

This means that Final Cut will have access to parts of macOS introduced at last year’s Apple Worldwide Developers Conference - the most likely feature to be added being eGPU compatibility, as introduced in the most recent update to High Sierra. Although parts of Final Cut Pro 10.4 and earlier could be sped up by attaching an eGPU, some core parts weren't.

If you haven't updated Final Cut Pro on your computer before, there is a support page from Apple that gives useful tips.

Cinematographers discuss ProRes RAW

Sunday, 08 April 2018

Here are some excerpts from a discussion over at CML where TV and cinema cinematographers discuss ProRes RAW. (link via today’s Tao Colorist Newsletter):

Ned Soltz:

ProRes RAW bit depth depends upon what the camera sends out. So for Varicam it would be 14 bit, for Sony FS it would be 12 bit

Mitch Gross, Panasonic Cinema product manager:

Both the EVA1 and VariCam LT RAW outputs will be supported by the Atomos recorders for ProRes RAW capture. 4K60p/2K240p at launch on Monday, EVA1 5.7K30p in May.

Scott Ferril:

I'm certain "RAW" will now permanently change to mean a Bayer pattern

James Marsden:

The point is I don't see ProRes RAW helping with any of this, and I find Almost all clients are editing in Premier or Avid […] ProRes RAW is unlikely to work on a 2012 Mac Pro.

Alexander Ibrahim:

I do expect ProRes Raw to enable some productions to move from ProRes/Rec709 to a raw workflow and HDR. […] It will matter to a lot of my productions though. R3D nearly breaks a lot of their post workflows. ProRes is easy, but a little too constraining. It will shift the industry, especially the low end and mid range, in ways we should all be excited about.

Mitch Gross:

While I agree that ProRes RAW is a pretty terrific opportunity to “bring RAW to the masses” let’s all make sure not to get too carried away. ProRes RAW may be (Apple) processor friendly, don’t forget that the files are still something like three to four times the size of something like AVC-Ultra or All-I codecs. And they’re approaching 10 times the size of a high quality LongGOP.

Alister Chapman:

I think we need to think a bit differently to how we do now. We tend to assume raw must be graded, must have a load of post work done, when really that’s not necessarily the case.

Paul Curtis:

We should not be working in 709 any more, the tail ends of the gamma curve just compress usable highlight and shadow detail, it's a delivery gamma, not a workflow one.

Also some of us need all the full range linear in post.

So if Apple had slammed down a ProRes Linear intermediate codec, with VBR and maybe a couple of quality settings and found a way to read that data in 'simple' mode for decimating the output for speed then i for one would be all over that. Basically EXR and ACES for the masses, with Piz or DW compression.

I just don't get what ProRes RAW will bring professionals

Mitch Gross:

ProRes RAW will work because it is Apple. With a single step it is supported in lord-knows-how-many-thousands of systems and a host of cameras. These cameras were like ships without a home port, wandering the seas with no effective and manageable RAW workflow. Uncompressed CinemaDNGs? The data load is ridiculous and the workflow a bit mercurial from one camera to another and one post system to another.

ProRes RAW makes it easy. It levels the playing field. All those cameras go into it and will work just fine in FCPX. Finding and applying the correct LUT is easy. Everything just works. That’s the beauty of it.

There are many great points being made, so if you want a deep dive, follow along with the evolving discussion at the Cinematography Mailing List.

Atomos CEO interview on ProRes RAW

Saturday, 07 April 2018

REDSHARK have captured an exclusive video with Jeromy Young, the CEO of Atomos to get their take on ProRes RAW. Thanks to Charles Wren for the link. 

Here are a few excerpts from the 17 minute interview:

He said that Atomos supporting ProRes RAW is the culmination of years of work. Atomos aim to supply 80% of the market – ARRI have the top-end cinema workflow sorted. They see it as their task to take the best of that workflow and make it available for everyone else.

Although Atomos could capture 4K60 from the RAW output of the Sony FS5…

…we couldn't do it justice when we went to ProRes. It was the right solution for 10 years ago.

We approached Apple and asked would you guys be interested in giving us a standard to go to.

CinemaDNG is about individual frames…

With ProRes RAW we're dealing with a whole video package that has metadata in it that the application can read that you can apply and transform each pixel into video to see in whatever way you want.

Jeromy believes that individual camera makers will produce plugins that run in NLEs that will make the most of the ProRes RAW that was recorded by their cameras. 

The ProRes RAW software for the Shogun Inferno and the Sumo 19 will be a free update. Because ProRes RAW file sizes are much smaller than CinemaDNG, Atomos devices can remain with SATA storage even when recording 4K60. 

He also discusses whether NLEs other than Final Cut Pro X will support ProRes RAW and how Atomos’ market aligns with Final Cut.

Final Cut Pro 10.4.1: ProRes RAW and captions

Thursday, 05 April 2018

Apple have announced the next version of Final Cut Pro X will have two features for high-end workflows. The free update will include ProRes RAW for better footage acquisition and flexible closed captioning for media distribution. It will be available from Monday April 9th from the Mac App Store.

Updated with more information on exporting using roles and Compressor 4.4.1

ProRes RAW - for Final Cut Pro 10.4.1 only

ProRes RAW provides the real-time performance and storage convenience of ProRes 422 HQ and ProRes 4444 with the postproduction flexibility of camera RAW. The new proposition from Apple is effectively: “Add any camera you have into a RED-like RAW workflow with an Atomos recorder and Apple professional video applications.” This can be done now because Macs are now fast enough to work with multiple layers of camera source media in real time - instead of extracting the information from the source when mastering in a grading application.

Whereas the current family of ProRes codecs is designed for all stages of video production, Apple ProRes RAW and Apple ProRes RAW HQ are designed for acquisition. When ProRes RAW is used in Final Cut Pro 10.4.1, the output for distribution is ProRes 422 HQ or ProRes 4444 (although ProRes RAW would be a good codec for archiving ‘original camera negative’).

A camera sensor is a grid of photosites that can each only record a single red, green or blue brightness value. Footage for postproduction is made of a series of images where each pixel in the grid is made up of brightness values for red, green and blue. At some point in the workflow, the RGB values for each pixel need to be interpolated from the brightness values of adjacent red, green and blue photosites.

For a pixel at a red photosite, for example, the RGB value of that pixel in the video frame is based on the red brightness recorded at its location plus green and blue values interpolated from the brightness values recorded at adjacent photosites.

ProRes RAW encodes the information captured by individual camera photosites without interpolating RGB information for every position in the sensor array. At the point of being used in a timeline, Final Cut Pro creates the grid of RGB values by interpolating the brightness values recorded at individual photosites.

The ProRes RAW advantage is that there is more processing power in a Mac running Final Cut Pro than in a camera recording images on location. More processing power means the interpolation algorithm can be more advanced, and it can be modified if needed. Cameras must bake their pixel interpolation into the footage they record.
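As a rough illustration of that interpolation step, here is a toy demosaic of one pixel in Python. An RGGB Bayer pattern and simple nearest-neighbour averaging are assumed; production debayering algorithms are far more sophisticated:

```python
# A toy illustration of the 'demosaicing' step described above, assuming an
# RGGB Bayer pattern. This simply averages the nearest photosites of each colour.

def bayer_channel(row: int, col: int) -> str:
    """Which colour an RGGB Bayer photosite records at (row, col)."""
    if row % 2 == 0 and col % 2 == 0:
        return "red"
    if row % 2 == 1 and col % 2 == 1:
        return "blue"
    return "green"

def demosaic_pixel(raw, row, col):
    """Interpolate a full (R, G, B) value from raw sensor brightnesses."""
    def average(channel):
        values = [
            raw[r][c]
            for r in range(max(0, row - 1), min(len(raw), row + 2))
            for c in range(max(0, col - 1), min(len(raw[0]), col + 2))
            if bayer_channel(r, c) == channel
        ]
        return sum(values) / len(values) if values else 0.0
    return (average("red"), average("green"), average("blue"))

# A 2x2 RGGB tile: red=1.0, greens=0.5, blue=0.2
print(demosaic_pixel([[1.0, 0.5], [0.5, 0.2]], 0, 0))  # (1.0, 0.5, 0.2)
```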

RAW flexibility at ProRes data rates

In practical terms, ProRes RAW gives REDCODE RAW quality at ProRes data rates. Where a Mac running Final Cut Pro 10.4.1 can play 1 stream of REDCODE RAW 5:1 or 3 streams of Canon Cinema RAW Light, it will be able to play back 7 streams of Apple ProRes RAW HQ or 8 streams of Apple ProRes RAW. Also, Final Cut Pro is able to render and export ProRes RAW HQ 5-6 times faster than REDCODE RAW 3:1.

In practice you would use ProRes RAW where you used to use ProRes 422 HQ and ProRes RAW HQ where you used ProRes 4444. Because of how each RAW frame can vary, the data rates vary much more with ProRes RAW than they do with standard ProRes.

For more information on storage requirements and data rates for ProRes RAW, read the new Apple White Paper.

Initially there will be two ways to record Apple ProRes RAW: using the Sumo 19 or Shogun Inferno on-camera recorders from Atomos, or the 5K Super35 Zenmuse X7 camera mounted on a DJI Inspire 2 drone.

Atomos’ ProRes RAW page.

Interestingly, this new ProRes family initially only works with Apple video applications: Final Cut Pro 10.4.1, Motion 5.4.1 and Compressor 4.4.1. Could this be the start of Apple favouring their own post applications over other macOS tools?

Closed Captions - for TV, streaming services and apps

The other big new feature of Final Cut Pro 10.4.1 and Compressor 4.4.1 is the ability to import, create, edit and export closed caption text. Closed captions are the text that optionally appears at playback - be it in the Netflix application running on a set-top box, on broadcast TV, at special subtitled screenings in cinemas or in the YouTube iOS app.

Of course captioning should be done when picture and sound have been locked, but Apple have done a lot to implement this feature so that it copes well with the continuous changes made towards the end of postproduction.

The flexibility of Final Cut Pro X video roles means that captions in multiple formats and in multiple languages can be edited and exported from the same timeline.

Individual captions can be associated with video or audio clips in the primary storyline. This means that when these clips are edited and re-ordered, the captions move with their associated clip.

The big news is that captions can also be attached to connected audio and video clips. That means an individual caption can be connected to the specific piece of audio that it is transcribing. So although captioning is usually left until there is a picture and sound lock, you can start the captioning process earlier. Timeline changes made to clips in the primary storyline and to connected clips will be reflected in their associated captions.

Final Cut Pro 10.4.1 works with closed captions in one of two formats: CEA-608 and iTT.

CEA-608 is the long-standing closed caption format used in US broadcast TV and on DVDs worldwide. ‘iTunes Timed Text’ (iTT) captions are used in iTunes video bundles for movies and TV shows that can be bought or streamed from Apple. They are also used by Amazon Prime Video and YouTube.
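iTT is based on the W3C’s TTML markup. A heavily simplified sketch of what an .itt caption file looks like - real files exported by Final Cut Pro carry additional styling and timing attributes, so treat this as illustrative only:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<tt xmlns="http://www.w3.org/ns/ttml" xml:lang="en">
  <body>
    <div>
      <!-- Each caption is a timed paragraph -->
      <p begin="00:00:01.000" end="00:00:03.500">Hello, and welcome.</p>
      <p begin="00:00:03.500" end="00:00:06.000">Today we look at captions.</p>
    </div>
  </body>
</tt>
```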


Captions in Final Cut Pro 10.4.1

Captions can be imported as files generated by external services or applications (using the File > Import > Captions command). .scc and .itt formats are recognised for now.

Captions can be extracted from video files with encoded captions. Add the clip to the timeline and use the Edit > Captions > Extract Captions command.

Captions in compound clips or in multicam angles can be extracted and added to their parent timeline (Edit > Captions > Extract Captions).

Add a caption to the active language subrole at the playhead location using the Add Caption command: Option-C, or Control-Option-C if the caption editor is open (which means you can add a new caption while editing another).

An individual caption is shown in a language subrole lane in the captions area of the timeline. You choose which captions are visible in the viewer by activating the corresponding caption video subrole in the timeline index.

To open a selected caption in the caption editor, double-click it or choose the Edit Caption command (Shift-Control-C).

Captions can be edited in a floating caption window (to use timeline navigation shortcuts such as J, K, L, I and O without entering them into the caption editor, also hold down the Control key - Control-J, Control-K etc.):

Captions are automatically checked, errors are flagged in the timeline index (you can choose to only show errors)…

or in the timeline. In this example, captions overlap, which most caption formats do not allow:

This problem can be fixed with the Edit > Captions > Resolve Overlaps command. 

For more on fixing problems with captions that would mean they would not be valid when played back, there is an Apple support document on Final Cut Pro X Caption Validation.
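Conceptually, the overlap check treats captions as time intervals. A hedged Python sketch of the idea - the data model and trimming rule here are my own guesses at the behaviour, not Final Cut Pro’s code:

```python
# A toy model of caption overlap detection and resolution. Captions are
# modelled as (start, end) pairs in seconds.

def overlapping_indices(captions):
    """Indices (in start-time order) of captions that begin before the
    previous caption has ended - the error flagged in the timeline index."""
    ordered = sorted(captions)
    return [i for i in range(1, len(ordered)) if ordered[i][0] < ordered[i - 1][1]]

def resolve_overlaps(captions):
    """Trim each caption so it ends no later than the next one begins -
    one plausible reading of the 'Resolve Overlaps' command."""
    ordered = sorted(captions)
    return [
        (start, min(end, ordered[i + 1][0]) if i + 1 < len(ordered) else end)
        for i, (start, end) in enumerate(ordered)
    ]
```

After `resolve_overlaps`, `overlapping_indices` returns an empty list, which is the state a valid caption track needs to be in before export.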

Once you have timed the captions for one language, to start work on another language, you can duplicate them as a new language. Select the captions you want to work with, then choose the Edit > Captions > Duplicate Captions to New Language command.

Each caption format has various formatting options. If you are happy with the style of a caption, you can use the Caption Inspector to Save Style as Default and move to another caption to Apply Default Style. 

CEA-608 captions can have more than one field on screen at once. You can use the Inspector to add and format up to three extra fields per caption:

If you have a long caption, you can split it into individual captions using the Edit > Captions > Split Captions command (Control-Option-Command-C).

Conversely, you can combine consecutive captions into one longer caption using the Edit > Captions > Join Captions command. 

By default, captions are connected to the primary storyline. To connect a caption to a connected clip that overlaps the caption in the timeline:

  1. Select the caption.
  2. Option-Command-click the connected clip you want the caption to be associated with.

Captions are not supported when sharing to Facebook. If you have captions in your project, they will not appear when you share the project to Facebook.

If you want to export just the captions from a timeline, use the File > Export Captions command.

New Roles tab when sharing

To make preparing productions for distribution or for collaboration easier, Final Cut Pro 10.4.1 has a new Roles tab in the Share dialog box:

To make preparing to export easier, Final Cut will respect which roles and subroles are on or off in the timeline when sharing. 

In the Roles tab you can

  • Add an audio track to the export file
  • Choose an audio channel format for a track
  • Combine roles in a track
  • Remove roles from a track
  • Add captions to the export
  • Save a preset (Click the Roles as pop-up menu and choose Save As in the Presets section)

When you share a Master File as Separate Files, in the Roles tab you can

  • Add a track or file to the export
  • Combine roles in an output track or file
  • Remove roles from a track or file
  • Choose an audio channel format for a track (Mono, Stereo, or Surround)
  • Remove a track or file from the export
  • Save a preset (Click the Roles as pop-up menu and choose Save As in the Presets section)

It looks like you can't yet add a video and audio file and then choose which video and audio roles you want it to include. These separate files are either video or audio. 

Compressor 4.4.1

The next version of Compressor has gained some features too:

  • Import, edit and embed closed captions and subtitles (but not author captions from scratch)
  • Add optional voice narration (descriptive audio tracks) to iTunes Store packages
  • Add metadata from an XML property list file 
  • Use a movie (with optional audio) as the background to Blu-ray and DVD menus

Captions in Compressor

Those who need to add captions to finished videos can use Apple’s video distribution preparation application instead of a full NLE. 

Built-in settings and destinations support captions: “Apple Devices (in both the H.264 and HEVC codecs), ProRes, Publish to YouTube, Create DVD, and other settings that use the QuickTime Movie, MPEG-2, and MPEG-4 formats.” Note that captions are not supported when sharing to Facebook.

Standard Compressor jobs can only import a single .scc (CEA-608) or .itt (iTunes Timed Text) file. If an imported video file already contains embedded CEA-608 closed captions, Compressor adds the caption data to the job.

You can edit each caption’s text, appearance, position, animation style and timing. You can also add new captions at the time of your choice.

If you have multiple captions selected in the captions palette, you can adjust their start or end times by frames, seconds or minutes at the same time.

YouTube and Vimeo support CEA-608 captions that Compressor encodes into videos. If you use iTT subtitles, Compressor will generate a separate .itt file and will automatically upload it if you use the YouTube or Vimeo presets.


Compressor has long been able to add metadata from QuickTime movie files to jobs. Version 4.4.1 can also add metadata stored in XML property list files in the following metadata categories:


Using a standard set of property lists when exporting batches means that other tools that can read this metadata can make decisions based on property values (such as specific keywords).
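A macOS property list is a simple XML format. A metadata plist for a Compressor job might look something like this - the keys shown are purely illustrative; check Apple’s documentation for the metadata categories Compressor actually recognises:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- Illustrative keys only, not Compressor's documented schema -->
    <key>keywords</key>
    <array>
        <string>interview</string>
        <string>episode-12</string>
    </array>
    <key>copyright</key>
    <string>© 2018 Example Productions</string>
</dict>
</plist>
```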

Final Cut Pro 10.4.1 and Compressor 4.4.1 - Updates for professional workflows

Although Apple don't often let themselves be guided by external trade events, this is a rare update that seems to be prompted by NAB happening in Las Vegas next weekend. I'm not sure how many naysayers will be swayed by the inclusion of closed captions. ProRes RAW however shows that Apple is serious about trying to attract more high-end workflows to the Mac, and Final Cut Pro X specifically: “Don’t worry about good cameras with bad codecs, we have the acquisition format you need for HDR workflows. Available now in Apple pro video applications only.”

Apple’s ‘Everyone Can Create’ - we need more stories than apps

Tuesday, 27 March 2018

Since May 2017, Apple has been running its ‘Everyone Can Code’ educational programme. It provides video-based and interactive book-based coursework for teachers and trainers to help people learn how to make applications in Apple’s Swift programming language. Schools and universities operate Apple-supported courses in app development.

Although it is great that more people can learn software development this way, I think that the ability to know how to tell stories is a skill that a wider range of people need in their day-to-day lives.

People need to tell stories more often than they need to solve problems with app development.

At an education event in Chicago today, Apple announced that a new programme is coming: ‘Everyone Can Create.’ It does for music, film, photography and drawing what Everyone Can Code did for programming. The difference is that they are showing how using tools to create music, videos and pictures can be useful to learn a variety of subjects. 

Apple have already uploaded previews of the Video, Music, Photography and Drawing student and teacher guides for iBooks.

The moviemaking examples for students use Clips for iOS running on an iPad: 

Moviemakers don’t just shoot video clips, they put them together in a way that tells a story, documents an event, persuades, or even instructs. While photographers capture a single moment or emotion in a photo, moviemakers combine multiple images, both videos and photos, to tell a complete story.

In this activity, you'll learn some basic techniques using the Clips app to build a visual story and start thinking like a moviemaker.

The preview of the lesson guide for teachers includes how to prepare to make an interview video:

Students choose an interview topic, compose an interview script, then record an interview with a peer, family member, or other guest expert.
Have students follow these guiding steps:

  1. Identify your interview topic and build a short list of things you know and don’t know about it.
  2. Find a friend, family member, or community member who has experience with the topic and is willing to be interviewed.
  3. Compose a script that includes a brief introduction and at least three insightful questions you’ll ask during the interview.
  4. Choose a quiet and well-lit location to record your interview.
  5. Record an introduction to yourself, your interviewee, and the main topic.
  6. Switch to the rear camera to record your interviewee’s responses. Trim clips to keep the interview concise.
  7. Add posters to introduce or highlight big ideas. Text on posters is most effective when it’s short and sweet.
  8. Arrange clips so the finished video resembles a conversation between you and your interviewee.
  9. Share your video with friends, family, and community members.



I'm glad Apple is spending more time supporting video literacy. Those who learn to educate themselves by telling stories through film will soon learn how to tell other stories through film - both to entertain and to change their worlds.

Editors: Don’t worry about the technique of editing getting ‘too easy’

Monday, 26 March 2018

Some editors don't like Final Cut Pro X because other film makers, such as directors, can pick it up so quickly. Here is a (Google-translated) quote from an article about the editing of award-winning Spanish feature film ‘Handia’:

I think there is a fear on the part of some editors to be dispossessed of their tool. A Moviola, an Avid or even Premiere requires prior knowledge. With FCPX, everything is facilitated, I would say, simplified and the editor can think "If the director can do what I do, what do I do?". To this I usually reply that our value as assemblers is not in the machine. It is true that we assume the proper development of the entire process between filming and post-production, but we are also the first spectators of the film, we are not contaminated by scriptwriting, filming, we can contribute a lot. The tool is not so important.

Editors think they are the people who know how to make the NLE put the film together. If collaborators can get results as quickly, what do editors bring to the project? They need to remember that editors are better at putting the film together - even if others can use the NLE as quickly.

Post tool users used to have ‘moats’ protecting them against too much competition: hardware cost, software cost and software difficulty. As long as these three things remained high, a less-talented editor had less to fear from competition. Now that these moats are going away, financial background will be less of a differentiator - personal skills will make the difference.

Read about how two editors collaborated with two directors - in Spanish and Google-translated English.

How many flicks per frame?

Wednesday, 24 January 2018

Facebook’s Oculus division have defined a new unit of time, says BBC News:

The flick has been designed to help developers keep video effects in sync, according to a description on the code-sharing site GitHub.

A flick, derived from "frame-tick", is 1/705,600,000 of a second - the next unit of time after a nanosecond.

A researcher at Oxford University said the flick wouldn't have much general impact but may help create better virtual reality experiences.

Although most people making VR hardware are now aiming for displays that refresh 90 times a second, video is available at many different frame rates. It is hard to make sure all the frame updates happen at the same time and at the right time. The small monitors inside a head-mounted display must update more often than the frame rate of the source video in order for the video to follow the speed of normal head movement. The flat frames of video being sent to the viewer’s eyes are excerpts from a larger sphere of video.

If you have spherical footage that is designed to update every 59.94th of a second on a VR headset that is refreshed 90 times a second, the mathematics gets complicated, and errors can creep into the tens of thousands of calculations that must be done during a VR experience. This is partly because true frame rates cannot be captured exactly as decimal values. The frame rate for US TV is described as 29.97 frames per second, for example, but its true definition is a division: 30÷1.001 = 29.970029970029970029970029970029… repeating on into infinity.

The flick trick is to come up with a unit of time small enough to divide into all common video frame rates and refresh rates without any remainder. This makes the calculations much simpler - adding and subtracting is faster than dividing. It is also more accurate, as the duration of each video frame or VR refresh update can be defined as a whole number of flicks.
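The arithmetic can be sketched in a few lines of Python, using exact rational numbers to show that these awkward frame rates really do divide into a whole number of flicks:

```python
from fractions import Fraction

FLICKS_PER_SECOND = 705_600_000  # the definition of the flick

def flicks_per_frame(frame_rate):
    """Exact duration of one frame in flicks, given a rational frame rate."""
    return FLICKS_PER_SECOND / frame_rate

# NTSC '29.97' fps is really the rational number 30000/1001 -
# one frame is a whole number of flicks:
ntsc = Fraction(30000, 1001)
print(flicks_per_frame(ntsc))            # 23543520

# A 90Hz VR headset refresh is also a whole number of flicks:
print(flicks_per_frame(Fraction(90)))    # 7840000
```

Because every frame duration is an integer, syncing a 29.97fps video frame against a 90Hz headset refresh becomes integer addition rather than error-prone decimal division.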

Here is a table of how many flicks correspond to popular video frame rates. Final Cut Pro can edit audio clips and keyframes at ‘subframe’ resolution - 1/80th of a project frame.

Frame rate                 1 frame in seconds    Flicks per frame    Flicks per FCPX subframe

US Film for TV
23.976 fps (24000/1001)    0.0417083…            29,429,400          367,867.5

Worldwide TV
24 fps                     0.0416667…            29,400,000          367,500
25 fps                     0.04                  28,224,000          352,800
29.97 fps (30000/1001)     0.0333667…            23,543,520          294,294
30 fps                     0.0333333…            23,520,000          294,000
50 fps                     0.02                  14,112,000          176,400
59.94 fps (60000/1001)     0.0166833…            11,771,760          147,147
60 fps                     0.0166667…            11,760,000          147,000

VR headset refresh
90 fps                     0.0111111…            7,840,000           98,000

PS: The highest commonly used ‘frame rate’ is that used in high-end audio: 192kHz, which defines samples of audio at 192,000 fps - which is 3,675 flicks per sample.

PPS: The Facebook conversation that prompted the creation of flicks.

Apple Final Cut Pro 10.4: 360º spherical video, colour, video HDR and more

Thursday, 14 December 2017

Today’s Final Cut Pro X update adds features for high-end professionals, those new to editing and everyone in between.

  • Professionals can now stay in Final Cut for advanced colour correction and advanced media workflows.
  • Everyone can explore 360° spherical video production - from those who have recently purchased consumer 360° cameras up to teams working on the most advanced VR video productions.
  • 10.4 includes the ability to open iMovie for iOS projects, for people who want to move from free editing tools to a full video, TV and film production application.

Apple has also updated Motion, their real-time motion graphics application, to version 5.4. Compressor, their video encoding and packaging application, has been updated to version 4.4.

All updates are free for existing users, and prices for new users remain the same from the Mac App Store: $299.99 for Final Cut Pro and $49.99 for both Motion and Compressor. Apple have yet to introduce subscription pricing on their professional video applications. Those who bought them from the Mac App Store in 2011 have not had to pay for any updates over the last six years.

The hardware requirements to run Apple’s professional video applications remain the same, but a few features depend on macOS 10.13 High Sierra: HEVC and HEIF support, and attaching a VR headset. If you don’t yet need these features, Final Cut Pro, Motion and Compressor will run on macOS 10.12.4 or later.

After I cover the new 360° spherical video features, I’ll give a rundown of the rest of the 10.4 update.

360° spherical video

There is a large range of audiences for 360° spherical video:

  • The majority use phones as a ‘magic window’ on Facebook and YouTube videos - as they move the phone around in 3D space, the video displayed on the screen updates to match its position, giving the feeling of being ‘inside’ the video.
  • Those that have bought devices worn on the head that cradle phones in front of their eyes as they look around (from less than $30).
  • People with VR headsets ($200-$2000).
  • Groups of people in rooms with devices that project the video on the inside of a dome.

The rest of this section is a much shorter version of my Final Cut Pro & 360° spherical video: All you need to know page. 

Final Cut Pro 10.4 can handle spherical video with ease. It recognises footage captured by 360° cameras and spherical rigs. You can create spherical video timelines to edit 360° footage. There’s a 360° Viewer in the Final Cut interface that can be shown next to the normal view that lets you get a feel of what your audience will see when they explore the sphere of video.

To look around inside the video sphere, drag in the 360° Viewer.

VR headset support

On faster Macs running macOS High Sierra you can install the Steam VR software, attach an HTC Vive VR headset, and watch video play straight from the Final Cut Pro 10.4 and Motion 5.4 timelines. Apple’s technical support document on the subject: Use a VR headset with Final Cut Pro X and Motion.

Spherical video now a peer to rectilinear video

It has been possible to work with 360° spherical video in video applications before. As they are designed to work with video in rectangles - rectilinear video - it was necessary to ‘fool’ them into working with the spheres of video that are at the core of 360° editing. This was done with specialised 360° plugins, which were applied as effects and transitions to footage in rectilinear video timelines. Although the user knew that the rectilinear footage represented spheres of video, the editing and motion graphics applications had no idea.

Apple have made spherical video a true peer of rectilinear video in Final Cut Pro 10.4 and Motion 5.4. If applications understand the nature of spherical video, existing features can be improved to do the right thing for 360° production, and new features can be added that benefit both rectilinear and spherical production.

‘Reorient’ orientation 

Media that represents spheres of video has ‘reorientation’ properties. These are useful when you want to choose which part of the sphere is visible when the viewer is looking straight ahead. When people start watching, playback begins with them facing forward. Even though viewers can look anywhere in the sphere, after initially looking around when the story starts, most will spend the majority of the time facing forward, turning maybe 60° to the left or right depending on video and audio cues.

In 10.4 you can show a Horizon overlay which marks what is straight ahead, with tick marks to show what is 90° to the left and 90° to the right (the left and right edges of the rectangular viewer define what is seen if the viewer turns 180° from the front).

There is a new Reorient transform tool for changing spherical video orientation by dragging in the viewer.

The 360° Viewer shows what is straight ahead when viewed online or in a spherical video device. Here the Reorient tool is being used to make the London bus appear straight ahead (X:0°, Y:0°, Z:0°):

This means that if the viewer is looking ahead when this shot starts, they’ll see the London bus.

Final Cut Pro 10.4 doesn’t yet convert footage from 360° cameras and rigs into spherical videos. Apple expects that editors will use the specialised software that comes with cameras to do this work - known as ‘stitching.’ If footage needs more advanced work (such as motion tracking to steady a shaky shot, or removing objects from spherical scenes), that will need to be done in applications such as Mocha VR.

Flat media inside a 3D sphere of video

Final Cut recognises spherical media and knows how it should work in a spherical timeline. It also recognises flat media, and knows what to do with it in a spherical timeline. In traditional rectilinear projects, each piece of media has X and Y position properties. This allows editors to position footage and pictures in the frame.

When flat (non-360°) media is added to a 360° spherical video project, instead of having X position, Y position and Scale properties in the ‘Transform’ panel of the clip inspector, there is an additional panel in the clip inspector: 360° Transform. This panel has properties that allow editors to position the flat media anywhere inside the video sphere. This can be defined in Spherical coordinates - two angles plus distance, or Cartesian coordinates - X, Y and Z co-ordinates (where the centre of the sphere is [0,0,0]).

Auto Orient makes sure the flat media always faces the viewer. X, Y, and Z Rotation is applied to the media after it is positioned using Spherical or Cartesian co-ordinates.
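As a sketch of the co-ordinate conversion involved - the axis conventions here are assumptions for illustration, not necessarily the ones Final Cut uses internally:

```python
import math

def spherical_to_cartesian(latitude_deg, longitude_deg, distance):
    """Convert a Spherical position (two angles plus distance) to
    Cartesian X, Y, Z with the centre of the sphere at (0, 0, 0)."""
    lat = math.radians(latitude_deg)
    lon = math.radians(longitude_deg)
    x = distance * math.cos(lat) * math.sin(lon)   # left/right
    y = distance * math.sin(lat)                   # up/down
    z = -distance * math.cos(lat) * math.cos(lon)  # depth (negative = ahead)
    return (x, y, z)

# Flat media placed straight ahead of the viewer at a distance of 100 units:
print(spherical_to_cartesian(0, 0, 100))  # (0.0, 0.0, -100.0)
```

Either representation describes the same point inside the sphere; the inspector simply lets editors pick whichever is more natural for the shot.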

360° effects - Modify whole sphere

Final Cut Pro 10.4 comes with 10 360°-specific plugins. Nine of them are used to apply a graphic effect to a whole sphere of video. Here they are in the effects browser:


360° effect: 360° Patch

There is another plugin that can be used to hide parts of the sphere of video, which is useful when you need to hide the equipment (or person) that is holding the 360° camera.

In the case of this shot, I am visible to those who look straight down, because I held the camera on the end of a pole above my head. The 360° Patch effect can be used to cover this part of the sphere.

The result is that the whole sphere looks like this:

360° titles and generators

10.4 includes a set of titles designed for 360° - they display and animate 3D text on and off:

10.4 comes with two 360° generators:

Apple’s 360° spherical video to do list

The Final Cut Pro 10.4 update is probably only the first part of Apple’s 360° spherical video plan. The way they have started is designed to accommodate many future updates. I expect that the video applications team still have a long to do list:

A tough list, but Apple are best positioned of anyone to be able to deliver these features to all Final Cut Pro users. The Apple video applications team can also bring 360° spherical video to millions of people through their other applications working on Apple hardware of all kinds: iMovie for macOS, iMovie for iOS, Clips for iOS and Memories for Photos.

Not only 360° spherical video

Here is a summary of the other features in the Final Cut Pro 10.4 update - with links to the new help system on these topics:

Advanced colour correction

Choose which part of the footage to base a white balance on. A new option in the Color Balance effect (Option-Command-B). Apple help topic on manual white balance

New grading tools such as colour wheels, colour curves plus hue and saturation curves. Color Correction Overview.

New built-in camera LUTs (including support for the December 2017 RED Workflow update and software from Canon) and support for loading more camera LUTs. You can also control where in the pipeline LUTs are applied using the new Custom LUT effect. See Color Lookup Tables.

TIP: Important note pointed out by Gabriel Spaulding: Libraries do not carry LUTs that are applied using the new LUT features, so if you are sharing with other editors and you use these new features, make sure you manage the LUTs to prevent seeing this message:

All colour corrections can now be animated using keyframes.

HDR video

High-dynamic-range video allows the range of brightness levels in footage, projects and exports to be much larger. This means much more detail in brighter parts of the image. Wide Color Gamut and HDR Overview and Configure library and project settings for wide gamut HDR.

There is a new button in the library inspector.

Once clicked you can set your library to be able to support media and projects with wide gamut HDR.

As well as being able to set HDR properties for footage, projects and libraries, there is a new HDR Tools effect to support standards conversion. 10.4 can also generate HDR master files.

For a detailed article by someone much more expert than me on the subject of the new colour tools and HDR, read Marc Bach’s blog post.

iMovie for iOS projects

Any projects started in iMovie for iOS on an iPhone or iPad can be sent directly to Final Cut Pro 10.4 for finishing - very useful for the many professional journalists who prepare reports on their mobile devices. See Import from iMovie for iOS.

Additional video and still image formats

If 10.4 is running on macOS 10.13 High Sierra:

  • HEVC (High Efficiency Video Coding), also known as H.265, a video compression standard
  • HEIF (High Efficiency Image File Format), a file format for still images and image sequences
  • RF64, an extension to the WAV file format that allows for files larger than 4 GB


Final Cut Pro 10.4 libraries can be stored on connected NFS devices as if they were on local drives.

Retiming speed

Optical flow generation of new frames is now much faster as it has been rewritten to use Metal.

Improved Logic audio plugin UIs

The UIs have been redesigned and also been made resizable (using a 50%/75%/100% pop-up menu).




The Final Cut Pro X Logic Effects Reference site has been updated to provide help on the redesigned audio plugins.

Notes for Final Cut users

Upgrading to Final Cut Pro 10.4

As this is a major update to Final Cut, the Library format has been updated to work with the new features. Apple advises that before you install the update from the Mac App Store, you should back up your current version of Final Cut and your existing libraries.

Before you update, check to see if you need to update your version of macOS. Final Cut will no longer run on macOS 10.11, but will still run on macOS 10.12.4. Apple’s detailed Final Cut Pro technical requirements.

Bits and pieces

New in Preferences: In the Editing panel, you can choose the default colour correction that is applied when you click the Color Inspector icon in the inspector, or press Command-6.

In the Playback panel, you can enable ‘Show HDR as raw values’ and ‘If frames are dropped on the VR headset, warn after playback.’

TIP: Control-click a clip in the browser to create a new project based on its dimensions and frame rate.

TIP: It is useful to be able to line up elements of waveforms when colour grading. To add a horizontal guide, click once anywhere in the waveform monitor.

Commands with unassigned keyboard shortcuts:

  • Add Color Board Effect
  • Add Color Curves Effect
  • Add Color Hue/Saturation Effect
  • Add Color Wheels Effect
  • Color Correction: Go to the Next Pane
  • Color Correction: Go to the Previous Pane
  • Toggle Color Correction Effects on/off

New commands:

  • Select Previous Clip - Command-Left Arrow
  • Select Next Clip - Command-Right Arrow
  • Extend Selection to Previous Clip - Control-Command-Left Arrow
  • Extend Selection to Next Clip - Control-Command-Right Arrow

For new keyboard commands associated with 360° spherical video, visit my Final Cut Pro & 360° spherical video: All you need to know page. 

360º features review: ‘Version 1.0’

Apple ‘went back to 1.0’ with Final Cut Pro X in 2011. They didn’t push Final Cut Pro 7’s 1990s software core to breaking point to accommodate new digital workflows. They imagined what kind of editing application they would make if they weren't limited by the ideas of the past. One result was that Final Cut Pro 10.0 was based around GPU rendering and multi-core CPU processing - the kind of processing that 360° spherical video production needs.

Getting established post production tools to do 360° via plugins is the way people without access to the core of applications had to do it. It is a stopgap that application users will eventually want to leave behind. Apple didn’t add 360° to Final Cut via plugins in a ‘do it the legacy way’ fashion. They jumped to ‘Version 1.0’ of 360° spherical video, answering this question: “As you have control over Final Cut Pro, how should you design 360° into its core?”

Following the Final Cut Pro 10.4 update, the Apple Video Applications team are now well placed to develop more of their products and services to support many more people who want to tell stories through 360° spherical video. For years now Final Cut Pro has been powerful enough to work on the biggest shows, yet friendly enough for the millions of people who know iMovie to make a small step towards professional production. With 10.4, that applies to 360° spherical video too. I’m looking forward to experiencing the stories they tell.

New iMac Pro - How much better at 360º spherical video stitching?

Tuesday, 12 December 2017

Vincent Laforet is another influencer who has had access to an iMac Pro for the last week. His blog post includes speed tests for Final Cut Pro X, DaVinci Resolve, Adobe Lightroom, RED Cine-X and Adobe Premiere. He also tested how fast the new iMac Pro is at stitching the multiple-sensor media recorded by an Insta360 Pro into a 6K stereo sphere. He compared it with his 2016 5K iMac and his recent 2017 MacBook Pro:

I processed 6K Stereo (3D) VR Insta360 PRO footage through their Insta360 Stitcher software, a 56 second clip, here were the export / processing times:

iMacPRO – 5 minutes 55 seconds

iMac – 11 minutes 09 seconds

MacBookPro 15” – 32 minutes

Read more about the computer and other results on his blog.

Video preview of iMac Pro from MKBHD - Marques Brownlee

Tuesday, 12 December 2017

Just as with the iPhone X, it looks like Apple are giving online influencers early access to new products and giving them permission to share their impressions before release. Marques Brownlee - known as MKBHD on the internet - has posted a video on the forthcoming iMac Pro. His 5.4 million subscribers are now finding out about the new Mac from Apple.

He mentions that this video was edited on the new iMac Pro in the next version of Final Cut Pro X, 10.4.

The model he's been working with for a week is the Intel Xeon W 3GHz 10-core iMac Pro with 128GB of RAM, a Radeon Pro Vega 64 GPU with 16GB of RAM and 2TB storage - the ‘middle’ iMac Pro in the range.

  • The physical dimensions exactly match today’s 2017 5K iMac.
  • No access to upgrading the RAM
  • Two more Thunderbolt 3 ports (for a total of 4)
  • 10 Gigabit Ethernet
  • Geekbench iMac Pro single core: 5,468 (vs. 5,571 for 2017 iMac and 3,636 for 2013 Mac Pro)
  • Geekbench iMac Pro multi-core: 37,417 (vs. 19,667 for 2017 iMac and 26,092 for 2013 Mac Pro)
  • Storage speed: 3,000MB/s read and write
  • Fan rarely spins up and keeps cool to the touch, despite high-end workstation components
  • 8- and 10-core editions available first; you'll have to wait longer if you order an 18-core.
  • “The ideal high-end YouTuber machine”

Looks like applications that take advantage of multiple CPU cores are going to see a big difference on the iMac Pro.

Apple have announced that orders for the new iMac Pro will start on Thursday.

Soon: More audio timelines that can automatically be modified to match changes in video timelines

Wednesday, 06 December 2017

In many video editing workflows, assistants have the thankless task of making special versions of timelines that generate files for others in post production. A special timeline for VFX people. A special timeline for colour. A special timeline for exporting for broadcast. A special timeline for audio. Transferring timelines to other departments is called ‘doing turnovers.’

Final Cut Pro X is the professional video editing application that automates the most turnovers. It seems that Apple want to stop the need for special timelines to be created. Special timelines that can go out of sync if the main picture edit changes. Final Cut video and audio roles mean that turnovers for broadcast no longer require special timelines.

The Vordio application aims to make the manual audio reconform process go away. At the moment, problems arise when video timelines change once the audio team start work on their version of the timeline. Sound editors, designers and mixers can do a great deal of work on a film and then be told that there have been changes to the picture edit.

What’s new? What’s moved? What has been deleted?

Vordio offers audio autoreconform: if (or rather when) the picture timeline changes, Vordio looks at the NLE-made changes and produces a change list that can be applied to the audio timeline in the DAW. It currently does this with Final Cut Pro X and Adobe Premiere timelines. If the sound team have already made changes in Reaper (a popular alternative to Pro Tools) and they need to know what changes have since been made to the video edit, Vordio can make changes to the audio timeline that reflect the new video edit. This includes labelling new clips and moved clips, and showing which clips have been deleted.
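The idea behind such a change list can be sketched like this - the clip and timeline structures here are illustrative only, not Vordio's own format:

```python
def change_list(old_cut, new_cut):
    """Compare two cuts of a timeline (clip name -> start time in seconds)
    and report which clips are new, moved or deleted."""
    changes = []
    for name, start in new_cut.items():
        if name not in old_cut:
            changes.append(("new", name, start))
        elif old_cut[name] != start:
            changes.append(("moved", name, start))
    for name in old_cut:
        if name not in new_cut:
            changes.append(("deleted", name, None))
    return changes

# The picture edit changed after the sound team started work:
old = {"interview": 0.0, "cutaway": 12.5, "titles": 20.0}
new = {"interview": 0.0, "cutaway": 10.0, "drone shot": 15.0}
print(change_list(old, new))
```

A real reconform has to cope with trims, retimes and duplicated clips as well, but the core is the same diff between two versions of a timeline.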

It looks like Vordio will soon work with other DAWs by using the Hammerspoon UI scripting toolkit.

StudioOne is a useful DAW that has a free version.

I expect timeline autoreconform to come to all timelines. To get a preview of what it could be like, check out Vordio.

Film from a single point, then wander around inside a cloud of pixels in 3D

Monday, 04 December 2017

People wearing 360° spherical video headsets will get a feeling of presence when the small subconscious movements they make are reflected in what they see. This is the first aim of Six Degrees of Freedom video (6DoF). The scene changes as the viewer turns in three axes and moves in three axes. 6DoF video is stored as a sphere of pixels and a channel of information that defines how far each of those pixels is from the camera.

Josh Gladstone has been experimenting with creating point clouds of pixels. The fourth video in his series about working with a sphere of pixels plus depth shows him wandering around a 3D environment that was captured by filming from a single point.

The scenes he uses in his series were filmed on a GoPro Odyssey camera. The footage recorded by its 16 sensors was then processed by the Google Jump online service to produce a sphere of pixels plus a depth map.

The pixels that are closest to the camera have brighter corresponding pixels in the depth map.

360° spherical video point clouds are made up of a sphere of pixels whose distance from the centre point have been modified based on a depth map.
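A minimal sketch of that mapping, assuming an equirectangular frame and simple projection conventions (real implementations will differ in axis and wrap choices):

```python
import math

def equirect_pixel_to_point(u, v, depth, width, height):
    """Map an equirectangular pixel (u, v), plus its distance from the
    depth map, to a 3D point around a camera at the origin."""
    longitude = (u / width - 0.5) * 2.0 * math.pi  # -pi..pi around the sphere
    latitude = (0.5 - v / height) * math.pi        # +pi/2 top, -pi/2 bottom
    x = depth * math.cos(latitude) * math.sin(longitude)
    y = depth * math.sin(latitude)
    z = depth * math.cos(latitude) * math.cos(longitude)
    return (x, y, z)

# The centre pixel of a 4096x2048 frame, 2.5 metres away, sits directly
# ahead of the camera:
print(equirect_pixel_to_point(2048, 1024, 2.5, 4096, 2048))  # (0.0, 0.0, 2.5)
```

Run over every pixel of a frame, this produces the cloud of points that a game engine like Unity can render in real time as the viewer moves.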

Josh has written scripts in Unity - a 3D game development environment - that allow real-time rendering of these point clouds. Real time is important because users will expect VR headsets to be able to render in real time as they turn their heads and move around inside virtual spaces.

You can move around inside this cloud of pixels filmed from a single point:

In the latest video in his series Josh Gladstone simulates how a VR headset can be used to move around inside point clouds generated from information captured by 360° spherical video camera rigs. He also shows how combining multiple point clouds based on video taken from multiple positions could be the basis of recording full 3D environments:

What starts as an experiment in a 3D game engine is destined to be in post production applications like Apple’s Motion 5 and Adobe After Effects, and maybe eventually in NLEs like Final Cut Pro X.

I’m looking forward to playing around inside point clouds.

28 videos, 53 million views (so far) - advice for your video essay YouTube channel

Sunday, 03 December 2017

Every Frame a Painting is a YouTube channel made up of video essays about visual storytelling. It has 1.3 million subscribers and millions of views. The creators, Taylor Ramos and Tony Zhou, have decided to close it. Luckily for us, they have written an essay on what they learned - including tips for others considering making videos in this form.

All the videos were made with Final Cut Pro X:

Every Frame a Painting was edited entirely in Final Cut Pro X for one reason: keywords.

The first time I watch something, I watch it with a notebook. The second time I watch it, I use FCPX and keyword anything that interests me.

Keywords group everything in a really simple, visual way. This is how I figured out to cut from West Side Story to Transformers. From Godzilla to I, Robot. From Jackie Chan to Marvel films. On my screen, all of these clips are side-by-side because they share the same keyword.

Organization is not just some anal-retentive habit; it is literally the best way to make connections that would not happen otherwise.

Even if you don't make scholarly videos on the nature of visual storytelling, there is a lot to be learnt from their article and the 28 video essays in their channel.

iPhone-mounted camera will capture 3D environments that can be fully explored in VR

Friday, 01 December 2017

Photogrammetry is the method of capturing a space in 3D using a series of still photos. It usually requires a great deal of complex computing power. A forthcoming software update for the $199 Giroptic iO (a 360° spherical video camera you mount onto your iPhone or Android phone) will give users the ability to capture full VR models of the spaces they move through.

Mic Ty of 360 Rumors writes:

the photographer simply took 30 photos, then uploaded them to cloud servers for processing. The software generates the 3D model, and can even automatically remove the photographer from the VR model, even though the 360 photos had the photographer in them.

Once the model is generated it can be included in full VR systems that can be explored in VR headsets. This will work especially well in devices such as the HTC Vive, which can detect where you are in 3D space and move the 3D model in VR to match. Remember though that many VR experiences are about interactivity, and in order to add that to a 3D environment, users will have to use a VR authoring system.

3D environments in post production applications

For those making 360° spherical videos, it is likely that they will want their post tools to be able to handle the kind of 3D models generated by systems like these. Storytellers range from animators (users of applications like Blackmagic Fusion) to editors and directors (users of Final Cut Pro X and Adobe Premiere). Developers should bear in mind that the way they integrate 3D environments in post applications should vary based on the nature of the storyteller.

However, it looks like there'll be a new skill to develop for 360° spherical photographers: where to take pictures in a space to capture the full environment in 3D.

Go over to 360 Rumors to see a video of the system in action.


Amazon launches Rekognition Video content tagging for third-party applications

Thursday, 30 November 2017

Amazon have announced a content recognition service that developers can use to add features to their video applications, Streaming Media reports:

Rekognition Video is able to track people across videos, detect activities, and identify faces and objects. Celebrity identification is built in. It identifies faces even if they're only partially in view, provides automatic tagging for locations and objects (such as beach, sun, or child), and tracks multiple people at once. The service goes beyond basic object identification, using context to provide richer information. The service is available today.

The videos need to be hosted in or streamed via Amazon S3 storage.

Apple are unlikely to incorporate Amazon Rekognition Video in their video applications and services. Luckily the Final Cut Pro X and Adobe Premiere ecosystems allow third-party developers to create tools that use this service. Post tool makers can then concentrate on integrating their workflow with their NLE while Amazon invest in improving the machine learning they can apply to video.

4K: Only the beginning for UK’s Hangman Studios’ Final Cut Pro X productions

Thursday, 30 November 2017

Some think that Final Cut Pro X has problems working with 8K footage, yet Hangman Studios has been making concert films at 4K and above with it since 2015. There’s a new case study by Ronny Courtens of Lumaforge:

Two years ago I made a conscious decision to get rid of all of my HD cameras. We decided that everything from now on had to be 4K and up.

…our boutique post production services in London are newly designed and built for 8K workflows and high end finishing. Drawing upon 17 years of broadcast post experience we've designed a newer, more simplified and efficient workflow for the new age of broadcast, digital and cinema. We’re completely Mac based running a mix of older MacPro 12-cores (mid 2010) with the newer MacPro (2013) models.

I imagine there’ll be space in their West London studios for at least one new iMac Pro. When Apple gave a sneak preview of Final Cut Pro 10.4 and Motion 5.4 as part of the FCPX Creative Summit at the end of October, they showed it easily running an 8K timeline on a prerelease iMac Pro.

Apple have said that Final Cut Pro X 10.4 will be able to support 8K HEVC/H.265 footage on macOS High Sierra. This kind of media is produced by 360° spherical video systems such as the Insta360 Pro. When 10.4 comes out in December, editors will be able to do even more at high resolutions.

What is ‘Six Degrees of Freedom’ 360° video?

Sunday, 26 November 2017

Six Degrees of Freedom – or 6DoF – is a system of recording scenes that, when played back, allows the viewer to change their view using six kinds (‘degrees’) of movement. Today, common spherical video recording uses multiple sensors attached to a spherical rig to record everything that can be seen from a single point. This means that when the video is played, the viewer can…

  • turn to the left or right
  • look up or down
  • twist their head to rotate their view

…as they look around inside a sphere of video.

If information has been recorded from two points close together, we perceive depth - a feeling of 3D known to professionals as ‘stereoscopic video.’ This feeling of depth applies as long as we don't twist our heads too much or look up or down too far - because ‘stereo 360°’ only captures information on the horizontal plane. 

6DoF camera systems record enough information so that three more degrees of movement are allowed. Viewers can now move their heads

  • up and down
  • left and right
  • back and forward

…a short distance.

As information about the environment can be calculated from multiple positions near the camera rig, the stereoscopic effect of perceiving depth will also apply when viewers look up and down as well as when they rotate their view.

Here is an animated gif taken from a video of a session about six degrees of freedom systems given at the Facebook developer conference in April 2017:

Six degrees of freedom recording systems must capture enough information that the view from all possible eye positions within six degrees of movement can be simulated on playback. 

A great deal of computing power is used to analyse the information coming from adjacent sensors to estimate the distance of each pixel captured in the environment. This process is known as ‘Spherical Epipolar Depth Estimation.’ The sensors and their lenses are arranged so that each object in the environment around the camera is captured by multiple sensors. Knowing the position in 3D space of the sensors and the specification of their lenses means that the distance of a specific object from the camera can be estimated.
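The core geometry can be sketched with the simplified planar stereo case: the same object appears at slightly different pixel positions in two adjacent sensors, and that offset (disparity) shrinks with distance. This is only an illustration of the principle, not the full spherical epipolar method used by real 6DoF rigs, and all numbers below are hypothetical.

```python
# Illustrative sketch: estimating an object's distance from two
# adjacent sensors a known distance apart (the 'baseline').
# Simplified planar stereo, not the actual spherical epipolar method.

def estimate_depth(focal_length_px: float, baseline_m: float, disparity_px: float) -> float:
    """Distance to an object from the pixel offset between two sensors."""
    if disparity_px <= 0:
        return float("inf")  # no measurable offset: too far away to estimate
    return focal_length_px * baseline_m / disparity_px

# A 20 px offset between sensors 10 cm apart, with a 1000 px focal
# length, implies the object is about 5 metres from the rig.
print(estimate_depth(1000.0, 0.10, 20.0))  # → 5.0
```

The same relationship explains why moving the sensors further apart, or raising their resolution, extends the range over which distances can be measured.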

6DoF: simulations based on everything you can see from a single point… plus depth

Post-processing the 6DoF camera data results in a single spherical video that includes a depth map. A depth map is a greyscale image that stores an estimated distance for every pixel in a frame of video. Black represents ‘as close as can be determined’ and white represents ‘things too far away for us to determine where they are relative to each other’ - usually tens of metres away. (This distance can be increased by positioning the sensors further apart or by increasing their resolution.)

Once there is a sphere with a depth map, the playback system can simulate X, Y and Z axis movement by moving distant pixels more slowly than closer pixels as the viewer moves their head. Stereoscopic depth can be simulated by sending slightly different images to each eye based on how far away each pixel is.
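The parallax idea behind this playback can be sketched as follows. This shows only the core relationship - how far a pixel shifts on screen for a given head movement - and ignores reprojection, occlusion filling and spherical geometry; the focal length is a hypothetical value.

```python
# Hedged sketch: screen-space parallax from a small head translation.
# Pixels with smaller depth values shift more than distant ones.

def parallax_shift_px(head_move_m: float, depth_m: float,
                      focal_length_px: float = 1000.0) -> float:
    """Approximate pixel shift for a sideways head movement."""
    return focal_length_px * head_move_m / depth_m

# A 5 mm head movement shifts a pixel 1 m away by about 5 px,
# but a pixel 10 m away by only about 0.5 px.
print(parallax_shift_px(0.005, 1.0))   # → 5.0
print(parallax_shift_px(0.005, 10.0))  # → 0.5
```

Sending each eye a view computed from a slightly different head position, using the same relationship, is what produces the stereoscopic effect.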

Moving millimetres, not metres

The first three degrees of environment video freedom - rotate - allow us to look anywhere from a fixed point: 360° to the left or right and 180° up and down. The next three allow us to move our heads a little: a few millimetres along the X, Y and Z axes. They do not yet let us move our bodies around an environment. The small distances that the three ‘move’ degrees of freedom allow make a big difference to the feeling of immersion, because playback can now respond to the small subconscious movements we make in day-to-day real life when assessing where we are and what is around us.